Term-Chat: A ChatGPT API Terminal Interface written in Python
- Skylar Castator
- Jul 3, 2023
- 6 min read
Updated: Jul 5, 2023

Introduction
When ChatGPT came out, I wanted to spend some time learning the API and seeing how I could fit it into my everyday workflow. One thing I was tired of was switching to the web browser version, or juggling the various interfaces built around the OpenAI framework. At the same time I was improving my terminal setup, brushing up on Vim, and hooking different AI tools into my Neovim configuration. After trying a few of the existing packages, I decided to make my own and see how far I could take it. A few working examples later, I finally have a full tool that runs on multiple platforms. So fork or download the project and give it a go!
GitHub Repository: https://github.com/SkylarCastator/chatgpt-terminal-extension
Installation
First checkout the repository using Git:
git clone https://github.com/SkylarCastator/chatgpt-terminal-extension.git
Make sure Python is installed; my virtual environment uses version 3.11, but other versions should work as well. Also make sure pip is set up alongside your Python installation. Next, cd into the project directory and run:
pip install -r requirements.txt
Once this command has finished, your environment should be ready to run the project. You can test it by running:
python3 main.py
Once you have verified that the application runs, link it to an alias like the command below.
alias gpt='python3 {path-to-chatgpt-project}/main.py'
On-boarding Process
The on-boarding process will ask you to create an API token for ChatGPT and enter it into the prompt. The token is then saved to a location on your machine. If you need to edit this setting at any time, enter the ':user' command into the prompt and follow the instructions to alter the token file.
Creating the token for ChatGPT can be done by going to the OpenAI website:
Create an account and then go to the API keys page here:
Copy the key and then enter it into the prompt. This will allow you to start using the application.
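For the curious, persisting a key like this usually amounts to writing it to a small config file and reading it back on start-up. Below is a minimal sketch of that idea; the file path and function names are my own illustration, not necessarily what Term-Chat does internally, and it assumes the pre-1.0 openai Python package that was current when this was written.
import json
import os

import openai

# Hypothetical location for the saved token; the real project may store it elsewhere.
TOKEN_FILE = os.path.expanduser("~/.term_chat/token.json")

def save_token(token: str) -> None:
    # Persist the API key so the user only has to enter it once.
    os.makedirs(os.path.dirname(TOKEN_FILE), exist_ok=True)
    with open(TOKEN_FILE, "w") as f:
        json.dump({"api_key": token}, f)

def load_token() -> None:
    # Read the key back and hand it to the openai client at start-up.
    with open(TOKEN_FILE) as f:
        openai.api_key = json.load(f)["api_key"]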
Basic Usage
Once the application is launched, you will be shown the main title screen and a hint to use the :help command.

You will then see a new line with the ">>" prompt. This is a placeholder showing where the user should enter input. If you type a question or a basic prompt, you will start a conversation with ChatGPT, as shown below.

You can also enter a command into this prompt. Commands start with a ':' character. You can see all of the available commands using the ':help' command.

We will go into more detail about these different commands in the rest of the tutorial, but the main ones to remember are:
:help - Shows the available commands at any layer of the application
:chat - Returns you to the main chat prompt from anywhere in the application
:exit - Exits the application in the terminal
Each instance of the application manages its own chat history. As long as you don't load or start a new conversation, everything from the current chat is saved in a single place named after a summary of the original prompt.
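For context, "managing the history" with the OpenAI chat API really just means re-sending the accumulated message list on every request. A minimal sketch of that pattern (generic code, not Term-Chat's exact implementation, using the pre-1.0 openai package):
import openai

messages = []  # running history for the current conversation

def ask(user_input: str) -> str:
    # Append the user's message, send the whole history, and keep the reply
    # so the model has context on the next turn.
    messages.append({"role": "user", "content": user_input})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply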
History
The next section covers managing the history of previous conversations. As mentioned above, all previous conversations are saved locally on the user's machine. To access the history menu, enter ':history' into the main prompt. Once the history menu is open, use the ':help' command to see all of the available options.

As shown above, there are several options for managing your previous conversations. The ones we will focus on include:
:path - Shows the path of the history folder, which contains the JSON files for previous conversations.
:load - Opens a previous conversation. This clears the current conversation and picks the old one up where you left off.
:delete - Deletes previous conversations to keep your history clean.
Loading a previous conversation can be very helpful for continuing a train of thought on something you have been working on. If you wish to return to the chat without loading an old conversation, use the ':chat' command and you will be taken back to the original conversation.
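Because the history is just the saved message list, resuming a conversation is a matter of reading it back in and continuing to append to it. A rough sketch, assuming the history files are plain JSON lists of messages (the folder path and file naming here are illustrative, not the project's exact layout):
import json
import os

HISTORY_DIR = os.path.expanduser("~/.term_chat/history")  # hypothetical location

def load_conversation(name: str) -> list:
    # Read a saved conversation back into the in-memory message list.
    with open(os.path.join(HISTORY_DIR, name + ".json")) as f:
        return json.load(f)

# Replace the running history and keep chatting from where you left off.
messages = load_conversation("summary-of-original-prompt")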
Prompt Helper
One useful tool that is included with the main project is the prompt helper. You can access this menu by using the ':prompt' command. If we look at the help menu, we will see the selection below.

The ':display' command simply prints out all of the available prompts. If you want to load a prompt, use the ':load' command instead. This opens a menu from which you can select an item to start a conversation.

The current selection includes 150 different prompts for the user to choose from. The prompts were collected from the repository below; a big shout-out to all the contributors to that project. https://github.com/f/awesome-chatgpt-prompts
Back in the application, note that each time a prompt is used it currently replaces the previous history for that prompt, so be careful! Once you have selected a prompt, the full prompt text is printed in the terminal along with an example use case. You can then continue the conversation and ask questions closer to the context you are after.

Once you have started a conversation, it will save everything in the history folder as shown above.
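The selection menu itself comes from the simple_term_menu package that the project already imports. The sketch below shows roughly how a prompt could be picked and used to seed a fresh conversation; the two prompts are stand-ins for the real collection, not part of the project.
from simple_term_menu import TerminalMenu

# Stand-ins for the 150 prompts loaded from awesome-chatgpt-prompts.
prompts = {
    "Linux Terminal": "I want you to act as a linux terminal...",
    "Travel Guide": "I want you to act as a travel guide...",
}

titles = list(prompts.keys())
index = TerminalMenu(titles).show()  # returns the index of the selected entry
if index is not None:
    # Start a brand-new conversation seeded with the chosen prompt,
    # replacing whatever history was there before.
    messages = [{"role": "user", "content": prompts[titles[index]]}]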
Developing Your Own Menus
One of the main goals of this application was to keep the project open source and let people build their own extensions for it. Each of the commands above, along with its menu, is created from a layer of JSON files and Python scripts that the application attaches. The plan for the future is a process for attaching these sub-modules from files located in the user's term-chat folder, so that people can build and share their own extensions, or keep private tools, without having to get their code merged into the original repository. That process will not change much from the current one, so I will describe it below.
The JSON file
This is how a menu is defined. It can contain multiple layers of nested menus, and it holds all of the inputs the application needs.
{
    "name": "Prompt",
    "prompt": ":prompt",
    "log_message": "Welcome to the prompt menu",
    "help_message": "Opens the menu to launch conversations based on prompts",
    "error_message": "That wasn't a correct prompt, use the :help prompt to find available prompts.",
    "prompt_header": "Prompt >>",
    "class": "prompt_menu",
    "function": "",
    "menu_items": [
        {
            "name": "Display",
            "prompt": ":display",
            "log_message": "",
            "help_message": "Display All Prompts",
            "error_message": "That wasn't a correct prompt, use the :help prompt to find available prompts.",
            "prompt_header": "",
            "class": "prompt_menu",
            "function": "list_all_prompts",
            "menu_items": []
        },
        {
            "name": "Load",
            "prompt": ":load",
            "log_message": "",
            "help_message": "Load a prompt to be used",
            "error_message": "That wasn't a correct prompt, use the :help prompt to find available prompts.",
            "prompt_header": "",
            "class": "prompt_menu",
            "function": "load_prompt",
            "menu_items": []
        }
    ]
}
Most of these keys are self-explanatory. The two most important are the "class" and "function" values: together they determine which code each menu will run.
The Python Code
Inside the menu_config.py file, you will see how the project links the menus into the application.
import gpt_terminal.managers.user_menu.user_menu as user_menu
import gpt_terminal.managers.model_menu.model_menu as model_menu
import gpt_terminal.managers.history_menu.history_menu as history_menu
import gpt_terminal.managers.prompt_menu.prompt_menu as prompt_menu
class MenuConfig:
    def __init__(self, terminal_interface):
        self.user_menu = user_menu.UserMenu(terminal_interface.user_data)
        self.model_menu = model_menu.ModelMenu()
        self.history_menu = history_menu.HistoryMenu(terminal_interface)
        self.prompt_menu = prompt_menu.PromptMenu(terminal_interface)

    def return_menu_json_paths(self):
        return [
            "gpt_terminal/managers/user_menu/user_menu.json",
            "gpt_terminal/managers/model_menu/model_menu.json",
            "gpt_terminal/managers/history_menu/history_menu.json",
            "gpt_terminal/managers/prompt_menu/prompt_menu.json"
        ]
The application pairs each JSON file with a class that the JSON can reference. For the prompt JSON above, the class reference is "prompt_menu". If you look at the line that instantiates the PromptMenu class, it is stored as the attribute prompt_menu, the same name used in the JSON file. This file is therefore responsible for both instantiating the classes the menus need and listing the JSON files that reference them.
Finally, the "function" key calls a function on the referenced class. For the display command in the menu above, it references the "list_all_prompts" function, shown below.
from simple_term_menu import TerminalMenu

from gpt_terminal.managers.prompt_menu.prompts_settings import PromptSettings


class PromptMenu:
    def __init__(self, terminal_instance):
        self.terminal_instance = terminal_instance
        self.prompt_settings = PromptSettings()

    def list_all_prompts(self):
        arr = self.prompt_settings.list_all()
        print("This is a list of all Prompts")
        if len(arr) > 0:
            for prompt in arr:
                print(prompt)
        else:
            print("No previous conversations were found")
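Putting the two pieces together, resolving the "class" and "function" strings presumably boils down to a runtime attribute lookup along these lines. This is only a conceptual sketch, not the repository's actual dispatcher:
def run_menu_function(menu_config, menu_entry):
    # menu_entry is one node from the JSON file; menu_config is the MenuConfig instance.
    target = getattr(menu_config, menu_entry["class"])      # e.g. menu_config.prompt_menu
    if menu_entry["function"]:
        handler = getattr(target, menu_entry["function"])   # e.g. prompt_menu.list_all_prompts
        handler()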
Once the function is declared, you can connect it to any number of scripts or classes you like. As you can see, the application can be extended in a very modular way; the only limit is what you want to build.
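As a concrete (and entirely made-up) example, a new extension could be as small as one class with one function, plus a JSON entry whose "class" is "weather_menu" and whose "function" is "show_forecast", registered in MenuConfig the same way the built-in menus are:
# weather_menu.py -- a hypothetical extension, shown only to illustrate the shape.
class WeatherMenu:
    def __init__(self, terminal_instance):
        self.terminal_instance = terminal_instance

    def show_forecast(self):
        # Any Python you like can run here once the menu command is entered.
        print("Imagine a weather forecast here.")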
If you have any further recommendations or questions about how this works, please feel free to reach out and I will be happy to improve the project.
Conclusion
I have started using this tool in my everyday work to quickly query ChatGPT and other language models. I plan to keep improving the code to make it easier to write your own prompts and to make the application more customizable. Let me know if you run into any issues with the project, or if there are features you would like to see next.
The best thing to do is open up a few tmux panes, launch Term-Chat, and get coding!