
Commit ff7f929

Merge branch 'feature/78-jenkins-authentication' of https://github.com/Yugansh5013/resources-ai-chatbot-plugin into feature/78-jenkins-authentication
2 parents: 9adb65c + 469e036

4 files changed: 78 additions & 3 deletions


README.md

Lines changed: 8 additions & 0 deletions
@@ -51,6 +51,14 @@ curl -X POST http://127.0.0.1:8000/api/chatbot/sessions
 
 See [docs/README.md](docs/README.md) for detailed explanations.
 
+## 🎥 Setup Video Tutorial
+
+[![Local Setup Video Tutorial](https://img.youtube.com/vi/1DnMNA4aLyE/0.jpg)](https://youtu.be/1DnMNA4aLyE)
+
+The tutorial shows how to fork the repo, set up the backend, download the LLM model, run the frontend, and verify the chatbot works.
+
+
+
 ## Troubleshooting
 
 **llama-cpp-python installation fails**: Ensure build tools are installed and use Python 3.11+

chatbot-core/api/services/chat_service.py

Lines changed: 4 additions & 2 deletions
@@ -16,7 +16,8 @@
     RETRIEVER_AGENT_PROMPT,
     SPLIT_QUERY_PROMPT,
 )
-from api.services.memory import get_session
+
+from api.services.memory import get_session, get_session_async
 from api.services.file_service import format_file_context
 from api.tools.tools import TOOL_REGISTRY
 from api.tools.utils import (
@@ -462,7 +463,8 @@ async def get_chatbot_reply_stream(
     logger.info("Streaming message from session '%s'", session_id)
     logger.info("Handling user query: %s", user_input)
 
-    memory = get_session(session_id)
+    memory = await get_session_async(session_id)
+
     if memory is None:
         raise RuntimeError(
             f"Session '{session_id}' not found in memory store.")

chatbot-core/api/services/memory.py

Lines changed: 7 additions & 1 deletion
@@ -2,7 +2,7 @@
 Handles in-memory chat session state (conversation memory).
 Provides utility functions for session lifecycle.
 """
-
+import asyncio
 import uuid
 import logging
 from datetime import datetime, timedelta
@@ -152,6 +152,12 @@ def get_session(session_id: str) -> ConversationBufferMemory | None:
 
     return memory
 
+async def get_session_async(session_id: str) -> ConversationBufferMemory | None:
+    """
+    Async wrapper for get_session to prevent event loop blocking.
+    """
+    return await asyncio.to_thread(get_session, session_id)
+
 
 def persist_session(session_id: str)-> None:
     """

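A note on the change above: `asyncio.to_thread` runs a blocking callable in a worker thread and awaits its result, so the synchronous session lookup no longer stalls the event loop while a reply is being streamed. The sketch below is a minimal, self-contained illustration of that pattern; the stub `get_session` and the dict it returns are placeholders, not the plugin's actual `ConversationBufferMemory` store.

```python
import asyncio
import time


def get_session(session_id: str) -> dict | None:
    """Blocking lookup standing in for the real in-memory session store."""
    time.sleep(0.5)  # simulate lock contention or slow storage access
    return {"session_id": session_id, "history": []}


async def get_session_async(session_id: str) -> dict | None:
    """Run the blocking lookup in a worker thread so the event loop stays free."""
    return await asyncio.to_thread(get_session, session_id)


async def main() -> None:
    # While the lookup runs in a worker thread, other coroutines keep progressing.
    memory, _ = await asyncio.gather(
        get_session_async("abc-123"),
        asyncio.sleep(0.1),
    )
    print(memory)


asyncio.run(main())
```

This is why `chat_service.py` switches from `get_session(session_id)` to `await get_session_async(session_id)` inside the async streaming handler.
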
docs/setup.md

Lines changed: 59 additions & 0 deletions
@@ -43,6 +43,15 @@ For the setup instructions have been provided for *Linux* and *Windows*. Moreove
 ```bash
 pip install -r requirements.txt
 ```
+
+> **Note:** The backend requires `python-multipart` for multipart form handling.
+> This dependency is included in the requirements file, but if you encounter
+> runtime errors related to multipart requests, ensure it is installed:
+>
+> ```bash
+> pip install python-multipart
+> ```
+
 5. **Set the `PYTHONPATH` to the current directory(`chatbot-core/`)**
 ```bash
 export PYTHONPATH=$(pwd)
@@ -57,6 +66,14 @@ For the setup instructions have been provided for *Linux* and *Windows*. Moreove
 * Download the file named `mistral-7b-instruct-v0.2.Q4_K_M.gguf`
 * Place the downloaded file in `api\models\mistral\`
 
+By default, the backend attempts to load the local GGUF model during
+startup. If the model file is missing, the server will fail to start.
+
+Contributors who do not need local inference can run the backend
+without a model by using test mode
+(see “Running without a local LLM model (test mode)” below).
+
+
 ## Installation Guide for Windows
 This guide provides step-by-step instructions for installing and running the Jenkins Chatbot on Windows systems.
 
@@ -103,6 +120,14 @@ This guide provides step-by-step instructions for installing and running the Jen
 ```bash
 pip install -r requirements-cpu.txt
 ```
+> **Note:** The backend requires `python-multipart` for multipart form handling.
+> This dependency is included in the requirements file, but if you encounter
+> runtime errors related to multipart requests, ensure it is installed:
+>
+> ```powershell
+> pip install python-multipart
+> ```
+
 > **Note**: If you encounter any dependency issues, especially with NVIDIA packages, use the `requirements-cpu.txt` file which excludes GPU-specific dependencies.
 
 5. **Set the PYTHONPATH**
@@ -123,6 +148,13 @@ This guide provides step-by-step instructions for installing and running the Jen
 * Download the file named `mistral-7b-instruct-v0.2.Q4_K_M.gguf`
 * Place the downloaded file in `api\models\mistral\`
 
+By default, the backend attempts to load the local GGUF model during
+startup. If the model file is missing, the server will fail to start.
+
+Contributors who do not need local inference can run the backend
+without a model by using test mode
+(see “Running without a local LLM model (test mode)” below).
+
 ## Automatic setup
 
 To avoid running all the steps each time, we have provided a target in the `Makefile` to automate the setup process.
@@ -141,6 +173,33 @@ make setup-backend IS_CPU_REQ=1
 
 > **Note:** The target **does not** include the installation of the LLM.
 
+### What does `setup-backend` do?
+
+The `setup-backend` Makefile target prepares the Python backend by:
+- Creating a virtual environment in `chatbot-core/venv`
+- Installing backend dependencies from `requirements.txt`
+  (or `requirements-cpu.txt` when `IS_CPU_REQ=1` is set)
+
+You usually do not need to run this manually.
+The `make api` target automatically runs `setup-backend`
+if the backend has not already been set up.
+
+## Running without a local LLM model (test mode)
+
+By default, the backend loads a local GGUF model on startup.
+For contributors who do not need local inference, a test configuration
+is available.
+
+The backend includes a `config-testing.yml` file that disables local
+LLM loading. This configuration is activated when the
+`PYTEST_VERSION` environment variable is set.
+
+Example:
+
+```bash
+PYTEST_VERSION=1 make api
+```
+
 ## Common Troubleshooting
 
 This section covers common issues encountered during setup, especially when installing

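Two of the additions to `docs/setup.md` above deserve a brief aside. First, the `python-multipart` note: web frameworks such as FastAPI (which matches the API style shown in the README) need that package only when a route declares form or file parameters, because those arrive as `multipart/form-data`. The endpoint below is a generic illustration of the kind of route that triggers the requirement, not the plugin's actual API:

```python
from fastapi import FastAPI, UploadFile

app = FastAPI()


# Declaring an UploadFile parameter makes FastAPI parse multipart/form-data,
# which requires the python-multipart package to be installed at runtime.
@app.post("/upload")
async def upload(file: UploadFile):
    contents = await file.read()
    return {"filename": file.filename, "size": len(contents)}
```

Second, the test mode: the docs state that `config-testing.yml` is used when the `PYTEST_VERSION` environment variable is set, but do not show the mechanism. As a rough, hypothetical sketch of how such a switch is commonly wired (the config directory, the default file name, and the loader function here are assumptions, not the plugin's code):

```python
import os
from pathlib import Path

import yaml  # PyYAML, assumed to be available in the backend environment


def load_config(config_dir: Path = Path(".")) -> dict:
    """Pick config-testing.yml (no local LLM) when PYTEST_VERSION is set."""
    # Hypothetical file names, for illustration only.
    name = "config-testing.yml" if os.getenv("PYTEST_VERSION") else "config.yml"
    with open(config_dir / name, encoding="utf-8") as fh:
        return yaml.safe_load(fh)
```

Running `PYTEST_VERSION=1 make api`, as in the example added to the docs, exports the variable to the backend process so a loader along these lines selects the testing configuration.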