You can connect to your GPU Container in several ways, depending on your needs, your preferences, and the template used to create the container.

Connecting to a container over the HTTP Service is quick and convenient, and secure when served over HTTPS. To connect using the HTTP Service:
| Template | Jupyter, Code Server | Ollama WebUI | Ollama | vLLM |
| --- | --- | --- | --- | --- |
| Pre-condition | None | None | None | Hugging Face Token (*) |
| Next steps | | | Testing your container using Postman (*) | Testing your container using Postman (*) |
(*) Hugging Face Token: a Hugging Face Token in the Environment Variables section is required when using the vLLM template. If you do not have a Hugging Face Token yet, please follow this guide.
(*) Testing your container using Postman: append /v1/models to your endpoint, then provide your API_TOKEN in the Authorization header. If you are using the vLLM template, also include HUGGING_FACE_HUB_TOKEN in the request parameters to test your container.
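If you prefer scripting the check over using Postman, the same request can be built with Python's standard library. This is a minimal sketch; the endpoint URL and token below are placeholders, not real values:

```python
# Sketch: build the GET /v1/models request described above.
# API_ENDPOINT and API_TOKEN are placeholders -- substitute your
# container's endpoint and your API_TOKEN.
from urllib import request

API_ENDPOINT = "https://your-container-endpoint"  # placeholder
API_TOKEN = "your-api-token"                      # placeholder

def build_models_request(endpoint: str, token: str) -> request.Request:
    """Append /v1/models to the endpoint and set the Authorization header."""
    return request.Request(
        url=endpoint.rstrip("/") + "/v1/models",
        headers={"Authorization": f"Bearer {token}"},
        method="GET",
    )

req = build_models_request(API_ENDPOINT, API_TOKEN)
print(req.full_url)  # https://your-container-endpoint/v1/models
# To actually send it against a running container:
#   response = request.urlopen(req)
#   print(response.read().decode())
```

Sending the request against a running container should return the list of models served by the template, which confirms the HTTP Service is reachable.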
You can also connect over SSH using the port and private key shown in your container details. The user and host below are placeholders (the original values were redacted), and the `-i` flag is required so the key file is used as the identity file:

ssh <user>@<host> -p 34771 -i ~/.ssh/id_e25595
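If you connect often, the same details can be stored in `~/.ssh/config` so a short alias suffices. This is a sketch; `gpu-container`, `<host>`, and `<user>` are placeholder names to replace with your container's values:

```ssh_config
Host gpu-container
    HostName <host>
    User <user>
    Port 34771
    IdentityFile ~/.ssh/id_e25595
```

With this entry in place, `ssh gpu-container` opens the same connection.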