Our approach: the most pragmatic technological option to achieve our goal

First, we download the entire lex4you website using a crawler and divide the content into segments that the large language model (LLM) can easily digest. These segments are vectorised using embeddings and stored in a vector database. The embeddings let us quickly find the segments relevant to a question that has been asked; these are then sent to the LLM, together with the original question, to obtain an answer. Finally, we return the answer along with references to the source pages. This approach is called retrieval-augmented generation (RAG).
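
To make this flow concrete, here is a minimal TypeScript sketch of the answering step, assuming the openai Node package (v3), which exposes createEmbedding and createChatCompletion; the Fragment type, the injected similarity search and the prompt wording are illustrative assumptions, not our production code.

```typescript
import { Configuration, OpenAIApi } from "openai";

const openai = new OpenAIApi(new Configuration({ apiKey: process.env.OPENAI_API_KEY }));

// A stored fragment of the lex4you site: its text and the page it came from (illustrative type).
interface Fragment { text: string; url: string }

// The vector-database lookup is injected here; a concrete pgvector query is sketched further down.
type SimilaritySearch = (embedding: number[]) => Promise<Fragment[]>;

export async function answerQuestion(question: string, search: SimilaritySearch) {
  // 1. Turn the question into an embedding vector.
  const emb = await openai.createEmbedding({ model: "text-embedding-ada-002", input: question });

  // 2. Retrieve the fragments closest to the question.
  const fragments = await search(emb.data.data[0].embedding);

  // 3. Send the question and the fragments to the LLM, and return the answer with its sources.
  const chat = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: "Answer using only the excerpts provided." },
      { role: "user", content: `${question}\n\nExcerpts:\n${fragments.map((f) => f.text).join("\n---\n")}` },
    ],
  });

  return {
    answer: chat.data.choices[0].message?.content ?? "",
    references: fragments.map((f) => f.url),
  };
}
```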

These references are extremely useful. Given that an LLM can sometimes 'hallucinate' and invent things, they enable users to check the statements made. Furthermore, basing the tool on a limited pool of information rather than the entire internet reduces the risk of hallucinations (GPT-4 generally hallucinates less, but our system is currently based on the faster ChatGPT 3.5). In any event, we strongly recommend checking the answers provided by reading the articles linked to the response, a recommendation that applies to any use of ChatGPT or any other LLM.

Technical datasheet

Retrieving and indexing content:

  • NestJS: back-end
  • Vue.js: very simple front-end application that sends the question to the back-end and displays the result
  • PostgreSQL: database (with pgvector extension)
  • SimpleCrawler: crawls the entire website and feeds the retrieved pages into the database
  • Cheerio: extracts the relevant content from each page and stores it in the database
  • OpenAI embeddings API: receives the relevant extracts so that we can store the returned embeddings in the database (a sketch of this indexing step follows the list)
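
A simplified sketch of this indexing step, assuming a hypothetical pages table with a pgvector column and, for brevity, one fragment per page (the real pipeline splits pages into smaller segments):

```typescript
import * as cheerio from "cheerio";
import { Pool } from "pg";
import { Configuration, OpenAIApi } from "openai";

const db = new Pool(); // connection settings come from the usual PG* environment variables
const openai = new OpenAIApi(new Configuration({ apiKey: process.env.OPENAI_API_KEY }));

// Hypothetical table: pages(url text, content text, embedding vector(1536)).
export async function indexPage(url: string, html: string): Promise<void> {
  // Extract the readable content from the crawled HTML.
  const $ = cheerio.load(html);
  const content = $("main").text().trim();
  if (!content) return;

  // Ask the OpenAI embeddings API for a vector representing this extract.
  const emb = await openai.createEmbedding({ model: "text-embedding-ada-002", input: content });
  const vector = emb.data.data[0].embedding;

  // Store URL, text and embedding; pgvector accepts the '[x,y,...]' literal syntax.
  await db.query(
    "INSERT INTO pages (url, content, embedding) VALUES ($1, $2, $3::vector)",
    [url, content, `[${vector.join(",")}]`],
  );
}
```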

Querying content:

  • OpenAI embeddings API: generates an embedding vector for the question it is sent
  • Vector search: queries the database to retrieve the closest text fragments and their URLs
  • Prompt: relevant extracts are added until the size limit is reached
  • OpenAI 'createChatCompletion' API: processes the prompt and streams the results to the browser using Server-Sent Events (all of the useful links found in our database are also displayed as references and sources; see the sketch after this list)
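
As a rough illustration of this querying step, the Express-style handler below embeds the question, pulls the closest fragments from the same hypothetical pages table, fills the prompt up to a size budget and relays the streamed completion as Server-Sent Events. The character budget, prompt wording and event payloads are assumptions for the sketch; the stream handling follows the v3 openai Node package, where stream: true with responseType "stream" yields raw SSE chunks.

```typescript
import { Request, Response } from "express";
import { Pool } from "pg";
import { Configuration, OpenAIApi } from "openai";

const db = new Pool();
const openai = new OpenAIApi(new Configuration({ apiKey: process.env.OPENAI_API_KEY }));

export async function ask(req: Request, res: Response): Promise<void> {
  const question = String(req.query.q ?? "");
  res.setHeader("Content-Type", "text/event-stream");

  // Embed the question and fetch the closest fragments via pgvector's distance operator.
  const emb = await openai.createEmbedding({ model: "text-embedding-ada-002", input: question });
  const vector = `[${emb.data.data[0].embedding.join(",")}]`;
  const { rows } = await db.query(
    "SELECT url, content FROM pages ORDER BY embedding <-> $1::vector LIMIT 5",
    [vector],
  );

  // Add extracts to the prompt until a rough character budget is reached.
  let excerpts = "";
  for (const row of rows) {
    if (excerpts.length + row.content.length > 8000) break;
    excerpts += row.content + "\n---\n";
  }

  // Request a streamed completion; with the v3 SDK the response body is a raw SSE stream.
  const completion = await openai.createChatCompletion(
    {
      model: "gpt-3.5-turbo",
      stream: true,
      messages: [
        { role: "system", content: "Answer using only the excerpts provided." },
        { role: "user", content: `${question}\n\nExcerpts:\n${excerpts}` },
      ],
    },
    { responseType: "stream" },
  );

  const stream = completion.data as unknown as NodeJS.ReadableStream;
  stream.on("data", (chunk: Buffer) => {
    // Each chunk holds "data: {json}" lines; forward every content delta to the browser.
    for (const line of chunk.toString().split("\n")) {
      if (!line.startsWith("data: ") || line.includes("[DONE]")) continue;
      const delta = JSON.parse(line.slice(6)).choices[0].delta?.content;
      if (delta) res.write(`data: ${JSON.stringify({ token: delta })}\n\n`);
    }
  });

  stream.on("end", () => {
    // Finally, send the source URLs so the front end can display them as references.
    res.write(`data: ${JSON.stringify({ references: rows.map((r) => r.url) })}\n\n`);
    res.end();
  });
}
```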

What about data protection?

Despite all of these advantages, ChatGPT does not offer full transparency in terms of data use. To mitigate this, the data and documents from the lex4you website are not stored at OpenAI, but solely on our own servers. We take data protection extremely seriously and are also examining alternative hosting solutions, such as Azure OpenAI, that offer stronger data protection policies. Another option would be to dispense with the ChatGPT cloud solution altogether. However, as things currently stand, open-source LLM alternatives involve significant initial and operational costs.

The lex4youGPT solution is further evidence of GPT's powerful ability to make daily life simpler and information of public interest more accessible. This is all thanks to the huge amount of work performed upstream by the lex4you team over the past five years: they are the ones who wrote the high-quality content that enables lex4youGPT to function correctly.

Does your website contain valuable information that could be easier to find? Get in touch with us; we can help you develop a ChatGPT-based chatbot.