LM Studio
Integrate LM Studio with Jan
LM Studio enables you to explore, download, and run local Large Language Models (LLMs). You can integrate Jan with LM Studio using two methods:
- Integrate the LM Studio server with the Jan UI
- Migrate a model downloaded in LM Studio to Jan
This guide covers both methods, using the Phi 2 - GGUF model from Hugging Face as the example.
Step 1: Server Setup
- Access the Local Inference Server within LM Studio.
- Select your desired model.
- Start the server after configuring the port and options.
- Update the `openai.json` file in `~/jan/engines` to include the LM Studio server's full URL:
```json
{
  "full_url": "http://localhost:<port>/v1/chat/completions"
}
```
Replace `<port>` with your chosen port number; the default is `1234`.
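The engine-configuration step above can be scripted for reproducibility. A minimal sketch, assuming the standard `~/jan/engines` location (a temporary directory stands in for it in the demo):

```python
import json
import tempfile
from pathlib import Path

def write_engine_config(engines_dir: Path, port: int = 1234) -> Path:
    """Write Jan's openai.json pointing at a local LM Studio server."""
    engines_dir.mkdir(parents=True, exist_ok=True)
    config = {"full_url": f"http://localhost:{port}/v1/chat/completions"}
    path = engines_dir / "openai.json"
    path.write_text(json.dumps(config, indent=2))
    return path

# Demo against a temporary directory; in practice use ~/jan/engines.
with tempfile.TemporaryDirectory() as tmp:
    cfg_path = write_engine_config(Path(tmp) / "jan" / "engines", port=1234)
    print(json.loads(cfg_path.read_text())["full_url"])
    # http://localhost:1234/v1/chat/completions
```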
Step 2: Model Configuration
- Navigate to `~/jan/models`.
- Create a folder named `lmstudio-<modelname>` (e.g., `lmstudio-phi-2`).
- Inside, create a `model.json` file with these options:
  - Set `format` to `api`.
  - Specify `engine` as `openai`.
  - Set `state` to `ready`.
```json
{
  "sources": [
    {
      "filename": "phi-2-GGUF",
      "url": "https://huggingface.co/TheBloke/phi-2-GGUF"
    }
  ],
  "id": "lmstudio-phi-2",
  "object": "model",
  "name": "LM Studio - Phi 2 - GGUF",
  "version": "1.0",
  "description": "TheBloke/phi-2-GGUF",
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "Microsoft",
    "tags": ["General", "Big Context Length"]
  },
  "engine": "openai"
}
```
For more details on the `model.json` `settings` and `parameters` fields, see here.
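To reduce copy-paste errors, the folder and `model.json` in this step can be generated programmatically. A sketch with field values copied from the example above (a temporary directory stands in for `~/jan/models`):

```python
import json
import tempfile
from pathlib import Path

MODEL_JSON = {
    "sources": [
        {"filename": "phi-2-GGUF", "url": "https://huggingface.co/TheBloke/phi-2-GGUF"}
    ],
    "id": "lmstudio-phi-2",
    "object": "model",
    "name": "LM Studio - Phi 2 - GGUF",
    "version": "1.0",
    "description": "TheBloke/phi-2-GGUF",
    "format": "api",      # "api" marks a server-backed model
    "settings": {},
    "parameters": {},
    "metadata": {"author": "Microsoft", "tags": ["General", "Big Context Length"]},
    "engine": "openai",   # routes requests through the OpenAI-compatible engine
}

def create_model_entry(models_dir: Path, spec: dict) -> Path:
    """Create <models_dir>/<id>/model.json for a server-backed model."""
    folder = models_dir / spec["id"]
    folder.mkdir(parents=True, exist_ok=True)
    path = folder / "model.json"
    path.write_text(json.dumps(spec, indent=2))
    return path

with tempfile.TemporaryDirectory() as tmp:
    created = create_model_entry(Path(tmp), MODEL_JSON)
    print(created.parent.name)  # lmstudio-phi-2
```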
Step 3: Starting the Model
- Restart Jan and proceed to the Hub.
- Locate your model and click Use to activate it.
Migrating Models from LM Studio to Jan (version 0.4.6 and older)
Step 1: Model Migration
- In LM Studio, navigate to My Models and access your model folder.
- Copy the model folder to `~/jan/models`.
- Ensure the folder name matches the model name in the `.gguf` filename; rename it as necessary.
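The folder-name check in the last step can be automated. A sketch, assuming each model folder holds exactly one `.gguf` file:

```python
import tempfile
from pathlib import Path

def normalize_model_folder(folder: Path) -> Path:
    """Rename a copied model folder so its name matches the .gguf filename stem."""
    ggufs = list(folder.glob("*.gguf"))
    if len(ggufs) != 1:
        raise ValueError(f"expected exactly one .gguf in {folder}, found {len(ggufs)}")
    expected = ggufs[0].stem
    if folder.name != expected:
        folder = folder.rename(folder.with_name(expected))
    return folder

# Demo: a misnamed folder is renamed to match its .gguf file.
with tempfile.TemporaryDirectory() as tmp:
    bad = Path(tmp) / "my-model-copy"
    bad.mkdir()
    (bad / "phi-2.Q4_K_S.gguf").touch()
    fixed = normalize_model_folder(bad)
    print(fixed.name)  # phi-2.Q4_K_S
```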
Step 2: Activating the Model
- Restart Jan and navigate to the Hub, where the model will be automatically detected.
- Click Use to activate the model.
Direct Access to LM Studio Models from Jan (version 0.4.7+)
Starting from version 0.4.7, Jan enables direct import of LM Studio models using absolute file paths.
Step 1: Locating the Model Path
- Access My Models in LM Studio and locate your model folder.
- Obtain the absolute path of your model file.
Step 2: Model Configuration
- Go to `~/jan/models`.
- Create a folder named `<modelname>` (e.g., `phi-2.Q4_K_S`).
- Inside, create a `model.json` file:
  - Set `id` to match the folder name.
  - Set `url` to the absolute file path of the model file (a direct binary download link ending in `.gguf` also works).
  - Set `engine` to `nitro`.
```json
{
  "object": "model",
  "version": 1,
  "format": "gguf",
  "sources": [
    {
      "filename": "phi-2.Q4_K_S.gguf",
      "url": "<absolute-path-of-model-file>"
    }
  ],
  "id": "phi-2.Q4_K_S",
  "name": "phi-2.Q4_K_S",
  "created": 1708308111506,
  "description": "phi-2.Q4_K_S - user self import model",
  "settings": {
    "ctx_len": 4096,
    "embedding": false,
    "prompt_template": "{system_message}\n### Instruction: {prompt}\n### Response:",
    "llama_model_path": "phi-2.Q4_K_S.gguf"
  },
  "parameters": {
    "temperature": 0.7,
    "top_p": 0.95,
    "stream": true,
    "max_tokens": 2048,
    "stop": ["<endofstring>"],
    "frequency_penalty": 0,
    "presence_penalty": 0
  },
  "metadata": {
    "size": 1615568736,
    "author": "User",
    "tags": []
  },
  "engine": "nitro"
}
```
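The entry above can also be generated from the model file itself. A sketch with field values mirroring the example; the `created` timestamp and the stand-in `.gguf` file in the demo are illustrative:

```python
import json
import tempfile
import time
from pathlib import Path

def nitro_model_json(model_file: Path) -> dict:
    """Build a Jan model.json for a local .gguf referenced by absolute path."""
    stem = model_file.stem  # e.g. phi-2.Q4_K_S
    return {
        "object": "model",
        "version": 1,
        "format": "gguf",
        "sources": [{"filename": model_file.name, "url": str(model_file.resolve())}],
        "id": stem,                      # id must match the folder name
        "name": stem,
        "created": int(time.time() * 1000),
        "description": f"{stem} - user self import model",
        "settings": {
            "ctx_len": 4096,
            "embedding": False,
            "prompt_template": "{system_message}\n### Instruction: {prompt}\n### Response:",
            "llama_model_path": model_file.name,
        },
        "parameters": {
            "temperature": 0.7,
            "top_p": 0.95,
            "stream": True,
            "max_tokens": 2048,
            "stop": ["<endofstring>"],
            "frequency_penalty": 0,
            "presence_penalty": 0,
        },
        "metadata": {"size": model_file.stat().st_size, "author": "User", "tags": []},
        "engine": "nitro",
    }

with tempfile.TemporaryDirectory() as tmp:
    gguf = Path(tmp) / "phi-2.Q4_K_S.gguf"
    gguf.write_bytes(b"\x00" * 16)  # stand-in file for the demo
    spec = nitro_model_json(gguf)
    print(spec["id"], spec["engine"])  # phi-2.Q4_K_S nitro
```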
For Windows users, use double backslashes in the `url` property, e.g., `C:\\Users\\username\\filename.gguf`.
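Hand-escaping those backslashes is easy to get wrong; serializing the path with a JSON library produces the doubled form automatically. A small illustration (the Windows path is hypothetical):

```python
import json

model_path = r"C:\Users\username\filename.gguf"  # raw string: single backslashes in memory
entry = {"url": model_path}

# json.dumps escapes each backslash, producing the doubled form the file needs.
print(json.dumps(entry))  # {"url": "C:\\Users\\username\\filename.gguf"}
```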
Step 3: Starting the Model
- Restart Jan and proceed to the Hub.
- Locate your model and click Use to activate it.