How to Install Lobe Chat - Multi LLM Frontend Framework

Overview
Most developers start with the OpenAI API when developing with LLMs. The OpenAI API operates separately from ChatGPT and allows the use of the latest model, GPT-4 Turbo (equivalent to gpt-4-turbo-2024-04-09). Developers using the OpenAI API often feel the need for a frontend similar to ChatGPT. This article introduces Lobe Chat, an open-source frontend that lets you use the OpenAI API like ChatGPT. In my case, I install it locally so I can use it conveniently at any time. The advantage of Lobe Chat is that it provides an integrated frontend for almost all major models: just enter an API key and you can use them through a unified UI, which is highly recommended.
Lobe Chat Installation
- Lobe Chat is easy to install because it is distributed as a Docker image. It does not require a separate database, and memory usage while the service is running is around 150MB.
$ mkdir lobe-chat
$ cd lobe-chat
# Write Docker Compose file
$ nano docker-compose.yml
services:
  lobe-chat:
    image: lobehub/lobe-chat:latest
    container_name: lobe-chat
    restart: always
    ports:
      - '3210:3210'
    environment:
      OPENAI_API_KEY: {your-openai-api-key}
      ACCESS_CODE: {your-password}
# Install and run Lobe Chat service
$ docker-compose up -d
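If you want Docker itself to confirm that the service came up, you can optionally add a healthcheck to the compose file above. This is a sketch, not part of the official image documentation: it assumes wget is available inside the container and that the web UI root on port 3210 responds once the app is ready.

```yaml
services:
  lobe-chat:
    # ... same image, ports, and environment as above ...
    healthcheck:
      # Assumption: wget exists in the image; probe the web UI root
      test: ['CMD', 'wget', '-qO-', 'http://localhost:3210/']
      interval: 30s
      timeout: 5s
      retries: 3
```

You can then check the reported health state with `docker ps` (the STATUS column shows "healthy" once the probe succeeds).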
Running Lobe Chat
- You can access Lobe Chat by navigating to http://localhost:3210 in a web browser.
Using Amazon Bedrock Claude 3.7 Sonnet
- You can add Amazon Bedrock Claude 3.7 Sonnet to the list of available models as follows:
Lobe Chat
→ [Settings]
→ [Language Model]
→ [Amazon Bedrock]
→ AWS Access Key Id: ***** (Enter)
→ AWS Secret Access Key: ***** (Enter)
→ AWS Region: Select [us-west-2]
→ Use Client-Side Fetching Mode: Select [On]
→ Model List: us.anthropic.claude-3-7-sonnet-20250219-v1:0 (Enter)
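To verify that the same credentials and model ID work outside Lobe Chat, you can call Bedrock directly. The sketch below builds the Anthropic Messages request body that Bedrock expects for Claude models; the actual boto3 invocation is shown commented out since it requires AWS credentials configured locally.

```python
import json

# The same model ID entered in Lobe Chat's model list above
MODEL_ID = "us.anthropic.claude-3-7-sonnet-20250219-v1:0"

def build_request(prompt, max_tokens=1024):
    """Build the Anthropic Messages API body that Bedrock expects."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

body = json.dumps(build_request("Hello, Claude"))

# To actually invoke the model (requires AWS credentials and boto3):
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-west-2")
# resp = client.invoke_model(modelId=MODEL_ID, body=body)
# print(json.loads(resp["body"].read())["content"][0]["text"])
```

If this direct call works but Lobe Chat does not, the problem is in the Lobe Chat settings rather than in your AWS access key or model permissions.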




