Generating SQL for SQLite using Other LLM, Marqo¶
This notebook runs through the process of using the vanna Python package to generate SQL using AI (RAG + LLMs), including connecting to a database and training. If you're not ready to train on your own database, you can still try it out with a sample SQLite database.
Which LLM do you want to use?
- OpenAI via Vanna.AI (Recommended): Use Vanna.AI for free to generate your queries
- OpenAI: Use OpenAI with your own API key
- Azure OpenAI: If you have OpenAI models deployed on Azure
- Mistral via Mistral API: If you have a Mistral API key
- [Selected] Other LLM: If you have a different LLM model
Where do you want to store the 'training' data?
- Vanna Hosted Vector DB (Recommended): Use Vanna.AI's hosted vector database (pgvector) for free. This is usable across machines with no additional setup.
- ChromaDB: Use ChromaDB's open-source vector database for free locally. No additional setup is necessary -- all database files will be created and stored locally.
- [Selected] Marqo: Use Marqo locally for free. Requires additional setup. Or use their hosted option.
- Other VectorDB: Use any other vector database. Requires additional setup.
Setup¶
In [ ]:
%pip install 'vanna[marqo]'
In [ ]:
from vanna.base import VannaBase
from vanna.marqo.marqo import Marqo
In [ ]:
class MyCustomLLM(VannaBase):
    def __init__(self, config=None):
        pass

    def generate_plotly_code(self, question: str = None, sql: str = None, df_metadata: str = None, **kwargs) -> str:
        # Implement here
        ...

    def generate_question(self, sql: str, **kwargs) -> str:
        # Implement here
        ...

    def get_followup_questions_prompt(self, question: str, question_sql_list: list, ddl_list: list, doc_list: list, **kwargs):
        # Implement here
        ...

    def get_sql_prompt(self, question: str, question_sql_list: list, ddl_list: list, doc_list: list, **kwargs):
        # Implement here
        ...

    def submit_prompt(self, prompt, **kwargs) -> str:
        # Implement here
        ...


class MyVanna(Marqo, MyCustomLLM):
    def __init__(self, config=None):
        # MARQO_URL and MARQO_MODEL should point at your running Marqo instance and chosen embedding model
        Marqo.__init__(self, config={'marqo_url': MARQO_URL, 'marqo_model': MARQO_MODEL})
        MyCustomLLM.__init__(self, config=config)


vn = MyVanna()
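The methods above are intentionally left unimplemented: fill them in against whatever LLM you have access to. As a minimal sketch, assuming a hypothetical OpenAI-compatible chat endpoint at http://localhost:8000/v1/chat/completions (the URL, model name, and payload shape are all assumptions; adapt them to your provider's API), the body of submit_prompt might look like the following, and the same body can be dropped into MyCustomLLM.submit_prompt.
In [ ]:
import requests

# Hypothetical connection details -- replace with whatever your LLM actually exposes
LLM_ENDPOINT = "http://localhost:8000/v1/chat/completions"
LLM_MODEL = "my-local-model"

def submit_prompt(prompt, **kwargs) -> str:
    # `prompt` is whatever your get_sql_prompt / get_followup_questions_prompt
    # implementations return; this sketch assumes a list of chat-style messages.
    response = requests.post(
        LLM_ENDPOINT,
        json={"model": LLM_MODEL, "messages": prompt},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]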
Which database do you want to query?
In [ ]:
vn.connect_to_sqlite('my-database.sqlite')
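To confirm the connection works, you can list the tables in the database (a quick sanity check; the results will depend on your own database):
In [ ]:
# Quick sanity check: list the tables in the connected SQLite database
vn.run_sql("SELECT name FROM sqlite_master WHERE type = 'table'")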
Training¶
You only need to train once. Do not train again unless you want to add more training data.
In [ ]:
df_ddl = vn.run_sql("SELECT type, sql FROM sqlite_master WHERE sql is not null")

for ddl in df_ddl['sql'].to_list():
    vn.train(ddl=ddl)
In [ ]:
# The following are methods for adding training data. Make sure you modify the examples to match your database.

# DDL statements are powerful because they specify table names, column names, types, and potentially relationships
vn.train(ddl="""
    CREATE TABLE IF NOT EXISTS my_table (
        id INT PRIMARY KEY,
        name VARCHAR(100),
        age INT
    )
""")

# Sometimes you may want to add documentation about your business terminology or definitions.
vn.train(documentation="Our business defines OTIF score as the percentage of orders that are delivered on time and in full")

# You can also add SQL queries to your training data. This is useful if you have some queries already lying around. You can just copy and paste those from your editor to begin generating new SQL.
vn.train(sql="SELECT * FROM my_table WHERE name = 'John Doe'")
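You can also store question/SQL pairs, which show the model exactly how a natural-language question maps to a query for your schema. The question and table below are illustrative placeholders:
In [ ]:
# Question/SQL pairs are often the most useful training data -- replace these with real examples
vn.train(
    question="How many people are 30 or older?",
    sql="SELECT COUNT(*) FROM my_table WHERE age >= 30",
)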
In [ ]:
# At any time you can inspect what training data the package is able to reference
training_data = vn.get_training_data()
training_data
In [ ]:
# You can remove training data if there's obsolete/incorrect information.
vn.remove_training_data(id='1-ddl')
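If you need to clear out several entries at once, you can iterate over the DataFrame returned by get_training_data(). This sketch assumes the DataFrame exposes 'id' and 'training_data_type' columns; check training_data.columns in your version before relying on them:
In [ ]:
# Remove every documentation entry (assumed column names -- verify against training_data.columns)
stale = training_data[training_data['training_data_type'] == 'documentation']
for training_id in stale['id'].to_list():
    vn.remove_training_data(id=training_id)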
Asking the AI¶
Whenever you ask a new question, Vanna finds the 10 most relevant pieces of training data and uses them as part of the LLM prompt to generate the SQL.
In [ ]:
vn.ask(question=...)
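For example, with the training data added above you could ask something like the following (the question is illustrative; ask about whatever your schema actually contains). ask() generates the SQL, runs it against the connected database, and plots the result:
In [ ]:
# An illustrative question based on the example training data above
vn.ask(question="How many people in my_table are 30 or older?")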
Launch the User Interface¶
In [ ]:
from vanna.flask import VannaFlaskApp
app = VannaFlaskApp(vn)
app.run()
Next Steps¶
Using Vanna via Jupyter notebooks is great for getting started, but check out additional customizable interfaces like the Flask web app shown above and Vanna's other front ends.