Tutorial

Build and Deploy Your Personal Terminal ChatGPT Bot in Python with OpenAI APIs

Published on August 26, 2024

Sr Technical Writer


Introduction

Conversational bots have become increasingly popular, providing an interactive way for users to engage with technology. OpenAI’s “GPT” (Generative Pre-trained Transformer) models enable developers to create sophisticated conversational agents.

In this tutorial, you will build and deploy your personal terminal ChatGPT bot using Python and OpenAI APIs on a DigitalOcean Droplet running Ubuntu.

By the end of this tutorial, you will have a fully functional bot that can handle user queries directly from the terminal, offering an engaging and dynamic user experience. Whether you’re a seasoned developer or just starting, this tutorial will equip you with the knowledge to harness the power of ChatGPT in your projects and build your own custom AI bots.

Prerequisites

Before diving into the implementation, ensure you have the following:

  • A DigitalOcean account to create the Droplet.
  • An OpenAI account with an API key.
  • Basic familiarity with Python and the Linux command line.

Step 1 - Setting Up the Environment

In this step, you will set up the environment to build and deploy your ChatGPT terminal bot on a DigitalOcean Droplet running Ubuntu.

Creating a DigitalOcean Droplet

Log in to your DigitalOcean account.

Now, let’s create a Droplet:

  • Navigate to the Droplets section.
  • Click on “Create Droplet.”
  • Choose the Ubuntu operating system (preferably the latest LTS version).
  • Select your preferred plan based on your requirements.
  • Choose a data center region.
  • Add your SSH keys for secure access.
  • Click “Create Droplet.”

Create a Droplet

Connecting to Your Droplet

On your local machine, open a terminal. Use the command below, replacing <your_droplet_ip> with your Droplet’s IP address:

ssh root@<your_droplet_ip>

Setting Up Python Environment

Run the following commands to ensure your system is up-to-date:

sudo apt update
sudo apt upgrade

Install Python and pip using the following commands:

sudo apt install python3 python3-pip

Let’s install virtualenv to create isolated Python environments:

sudo pip3 install virtualenv

Navigate to your desired directory and create a project folder:

mkdir my_chatgpt_bot
cd my_chatgpt_bot

Create and activate a virtual environment:

virtualenv venv
source venv/bin/activate

Install Required Python Packages

Install the openai package and any other dependencies:

pip install openai

Configuring the OpenAI API Key

First, obtain your OpenAI API key.

Now, let’s set the environment variable.

Store your API key securely in an environment variable. Open your ~/.bashrc (or ~/.bash_profile) file and add:

export OPENAI_API_KEY='your-api-key-here'

Reload the environment variables:

source ~/.bashrc

Confirm that the environment variable is set by running the following command in the terminal:

echo $OPENAI_API_KEY
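You can also confirm from Python that the key is visible to your programs. Here is a minimal check using only the standard library (masking the key is just a precaution so it never appears in terminal output):

```python
import os

# Read the key the shell exported; os.getenv returns None if it is not set
api_key = os.getenv("OPENAI_API_KEY")

if api_key:
    # Show only a short prefix so the full key never appears in terminal output
    print(f"OPENAI_API_KEY is set (starts with {api_key[:3]}...)")
else:
    print("OPENAI_API_KEY is not set - add the export line to ~/.bashrc and reload it")
```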

With the environment set up, you can start developing your ChatGPT bot. In the next step, we will write the bot’s code to handle user queries and interact with the OpenAI API.

Step 2 - Building the ChatGPT Bot

Now that the environment is set up, let’s build the ChatGPT bot. You will use the gpt-3.5-turbo model.

Here, you will use three libraries to implement this: openai, textract, and glob.

OpenAI is a leading artificial intelligence research organization that has developed the ChatGPT API, which allows us to interact with the powerful ChatGPT model. With the OpenAI API, you can send prompts and receive responses from the ChatGPT model, enabling you to create conversational chatbots. You can learn more about OpenAI and its offerings here.

Second, the textract Python library provides text extraction from a wide range of file formats, including but not limited to:

  • Text-based formats: TXT, CSV, JSON, XML, HTML, Markdown, and LaTeX.
  • Document formats: DOC, DOCX, XLS, XLSX, PPT, PPTX, ODT, and ODS.
  • eBook formats: EPUB, MOBI, AZW, and FB2.
  • Image formats with embedded text: JPG, PNG, BMP, GIF, TIFF, and PDF (both searchable and scanned).
  • Programming source code files: Python, C, C++, Java, JavaScript, PHP, Ruby, and more.

The glob package in Python is a built-in module that provides a convenient way to search for files and directories using pattern matching. It lets you find files matching a specified pattern, such as all files with a particular extension or a specific naming scheme. The bot will use it to answer questions based on the data you place inside the data directory in your project’s folder.
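You can try the same pattern the bot uses on its own. This short snippet creates a throwaway data directory with a couple of sample files and lists everything that matches:

```python
import glob
import os
import tempfile

# Create a temporary "data" directory with a couple of sample files
data_directory = tempfile.mkdtemp()
for name in ("notes.txt", "report.pdf"):
    open(os.path.join(data_directory, name), "w").close()

# "*.*" matches every file that has an extension, regardless of type
matches = sorted(glob.glob(os.path.join(data_directory, "*.*")))
print([os.path.basename(m) for m in matches])  # ['notes.txt', 'report.pdf']
```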

Next, let’s install the required textract Python library, as it does not come pre-installed:

pip install textract

Now, create a new file called mygptbot.py and open it in an editor:

vi mygptbot.py

Copy and paste the following code into mygptbot.py:
import os
import glob
import openai
import textract

class Chatbot:
    def __init__(self):
        # Read the API key from the environment and pass it to the openai client
        openai.api_key = os.getenv("OPENAI_API_KEY")
        self.chat_history = []  # keeps the conversation in memory for the bot

    def append_to_chat_history(self, message):
        # Append a message (user or assistant) to the chat history
        self.chat_history.append(message)

    def read_personal_file(self, file_path):
        try:
            # Convert the content of binary formats (PDF, DOCX, ...) to plain text
            text = textract.process(file_path).decode("utf-8")
            return text
        except Exception as e:
            print(f"Error reading file {file_path}: {e}")
            return ""

    def collect_user_data(self):
        # Collect local personal data from ./data to feed the model.
        # The pattern "*.*" matches every file with an extension in that directory.
        data_directory = "./data"
        data_files = glob.glob(os.path.join(data_directory, "*.*"))

        user_data = ""
        for file in data_files:
            file_extension = os.path.splitext(file)[1].lower()
            if file_extension in (".pdf", ".docx", ".xlsx", ".xls"):
                # Binary formats go through textract
                user_data += self.read_personal_file(file)
            else:
                # Plain-text formats can be read directly;
                # "with" closes the file automatically, even if an error occurs
                with open(file, "r", encoding="utf-8") as f:
                    user_data += f.read() + "\n"
        return user_data

    def create_chat_response(self, message):
        self.append_to_chat_history(message)  # remember the user's message

        user_data = self.collect_user_data()
        messages = [
            {"role": "system", "content": "You are the most helpful assistant."},  # high-level instructions for the model
            {"role": "user", "content": message},  # the user's current query
        ]

        if user_data:
            messages.append({"role": "user", "content": user_data})

        # The main call that runs the ChatGPT model with the given parameters
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=messages,
            temperature=0.7,
            max_tokens=256,
            top_p=0.9,
            n=1,
            stop=None,
            frequency_penalty=0.9,
            presence_penalty=0.9
        )

        # Add the model's reply to the chat history to keep the bot conversational
        reply = response.choices[0].message.content.strip()
        self.append_to_chat_history(reply)
        return reply

    def start_chatting(self):
        while True:
            user_input = input("User: ")
            if user_input.lower() == "exit":
                print("Chatbot: Goodbye boss, have a wonderful day ahead!")
                break
            bot_response = self.create_chat_response(user_input)
            print("Chatbot:", bot_response)

# Create an instance of the Chatbot class and start the conversation
chatbot = Chatbot()
chatbot.start_chatting()

Firstly, the model’s parameters, in a nutshell, do this:

  • Temperature: Controls the randomness of the responses. Higher values (e.g., 1.0) make the output more diverse, while lower values (e.g., 0.2) make it more focused and deterministic.

  • Max Tokens: Limits the length of the response generated by the model.

  • Top P: Specifies the cumulative probability threshold for choosing the next token (nucleus sampling). Lower values restrict sampling to the most likely tokens, making responses more focused, while higher values (e.g., 0.9) allow more variety.

  • N: This variable determines the number of different responses generated by the model, which helps explore different possibilities.

  • Stop: Allows us to specify a stopping phrase to indicate the end of the response.

  • Frequency Penalty: Penalizes tokens in proportion to how often they have already appeared, reducing verbatim repetition.

  • Presence Penalty: Penalizes tokens that have already appeared at all, encouraging the model to introduce new topics.

You can find more about fine-tuning these parameters here.
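To build intuition for how temperature works, here is a toy, standard-library-only illustration (a simplified model of token sampling, not OpenAI’s actual implementation): dividing the raw scores by the temperature before normalizing sharpens or flattens the resulting probabilities.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores to probabilities, scaled by temperature."""
    scaled = [x / temperature for x in logits]
    # Subtract the max for numerical stability before exponentiating
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical next-token scores

# Low temperature sharpens the distribution toward the top token
print(softmax_with_temperature(logits, 0.2))
# High temperature flattens it, spreading probability across tokens
print(softmax_with_temperature(logits, 2.0))
```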

Secondly, the functions defined above do the following:

  • append_to_chat_history(message): This function appends the user’s message to the chat history stored in the chat_history list.

  • read_personal_file(file_path): This function uses the textract library to extract text from personal files. It attempts to decode the extracted text using UTF-8 encoding. An error message is displayed if any errors occur during extraction.

  • collect_user_data(): This function collects the user’s data stored in the data directory inside the current working directory. It iterates through the files in the directory, determines their file types, and uses the appropriate method to extract text. It returns the combined user data as a string. The call glob.glob(os.path.join(data_directory, "*.*")) retrieves a list of file paths matching the pattern "*.*", which matches all files with any extension.

  • create_chat_response(message): This function constructs the chat response using the OpenAI ChatCompletion API. It appends the user’s message and the collected user data (if any) to the message list. The API call is made with the provided messages, and the response is stored in the response variable. The function then appends the response to the chat history and returns it.

  • start_chatting(): This function initiates an interactive chat session with the user. It prompts the user for input, generates the bot’s response using create_chat_response(), and prints the response. The conversation continues until the user enters “exit” to quit.

In the end, the while True loop continuously prompts the user for input. To exit the chatbot, type “exit”.
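Note that chat_history grows without bound as the conversation continues, which will eventually push the prompt past the model’s context window. One common refinement (not part of the code above, but easy to add) is to keep only the most recent messages. A minimal sketch, with a hypothetical cap of 10 messages:

```python
MAX_HISTORY = 10  # hypothetical cap on remembered messages

def trim_history(chat_history, max_messages=MAX_HISTORY):
    """Return only the most recent messages, dropping the oldest first."""
    return chat_history[-max_messages:]

history = [f"message {i}" for i in range(25)]
recent = trim_history(history)
print(len(recent))  # 10
print(recent[0])    # message 15
```

Calling trim_history(self.chat_history) before building the prompt would keep each request bounded while preserving recent context.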

Step 3 - Running the GPT Bot on Your Terminal

Open your Droplet’s console and run the following command to execute the Python file:

python3 mygptbot.py

Running the chatgpt bot

This is how you can easily interact with the ChatGPT bot and use it for multitasking, asking questions, and much more.

Your personal ChatGPT bot is now ready to chat. Start interacting with it by entering messages; the bot will respond accordingly. When you’re finished, type “exit” to end the conversation.

Conclusion

You have learned how to create and deploy a powerful ChatGPT bot on your Ubuntu machine using Python. The provided code allows your bot to consider and utilize personal user data from various file formats, enabling a more personalized user experience. You can integrate it with other platforms or build a web-based chatbot. With the versatility of ChatGPT and the simplicity of Python, the possibilities are endless.

Feel free to customize further and enhance your bot’s capabilities.

