# Unleash the Power of LLMs from Your Terminal

Welcome to AIChatFlow, an easy-to-use, down-to-earth toolkit for prototyping, testing, and deploying AI-based assistants.

AIChatFlow enables you to:

  1. Interact with OpenAI's GPT models directly from your terminal and manage your conversations stored in a SQLite database.
  2. Craft a powerful AI-based assistant using advanced prompt engineering techniques.
  3. Integrate your assistant with both external network services and programs installed on your machine.
  4. Use the AIChatFlow library to embed the assistant into your programs.
  5. Test your assistant locally, then deploy it as a scalable, multi-user, multi-tenant service.
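To give a concrete sense of item 1, the sketch below shows one way per-conversation chat history can be kept in a SQLite database using only Python's standard library. The table layout and function names here are illustrative assumptions, not AIChatFlow's actual schema:

```python
import sqlite3

# Hypothetical schema -- AIChatFlow's real database layout may differ.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE messages (
           conversation_id TEXT,
           role TEXT,           -- 'user' or 'assistant'
           content TEXT,
           created_at TEXT DEFAULT CURRENT_TIMESTAMP
       )"""
)

def log_message(conversation_id, role, content):
    """Append one chat turn to the conversation log."""
    conn.execute(
        "INSERT INTO messages (conversation_id, role, content) VALUES (?, ?, ?)",
        (conversation_id, role, content),
    )
    conn.commit()

def history(conversation_id):
    """Return (role, content) pairs in insertion order."""
    cur = conn.execute(
        "SELECT role, content FROM messages WHERE conversation_id = ? ORDER BY rowid",
        (conversation_id,),
    )
    return cur.fetchall()

log_message("demo", "user", "Hello!")
log_message("demo", "assistant", "Hi, how can I help?")
print(history("demo"))
```

Keeping every turn keyed by a conversation identifier is what makes it possible to resume or inspect past sessions from the terminal.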

For a more detailed discussion of supported use cases, see the Use Cases chapter.

# Project Status

AIChatFlow is still in its early phase, so don't be surprised if you encounter a few bugs or see occasional interface changes, particularly in the library API (the command-line interface is more stable). Despite this, the project is already quite usable and is in daily use. You are invited to try it and share your feedback, which is highly valuable and will help accelerate development.


We provide the community with a mailing list to discuss all aspects of the AIChatFlow project. Through it, you can ask questions, report bugs, request new features, and offer your feedback. For information on how to join the conversation, see the dedicated page.


Project documentation is available on a dedicated web page. We recommend starting with the Basic Usage Examples and the Conversation Templates Quick Start Guide sections. For a hands-on start, see the Installation and Configuration section. Further details can be found in the Conversation Templates Reference, CLI Reference, and External Services Integration sections.