[LlamaIndex](https://docs.llamaindex.ai/en/stable/) is a framework for building context-augmented LLM applications (e.g., [[Retrieval Augmented Generation|RAG]]).
```bash
uv add llama-index
```
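Out of the box, LlamaIndex uses OpenAI for both the LLM and the embedding model, so the quickstart below assumes an `OPENAI_API_KEY` is available in the environment (the key value here is a placeholder):
```bash
export OPENAI_API_KEY="sk-..."
```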
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Load every supported file from the local "data" directory.
documents = SimpleDirectoryReader("data").load_data()

# Embed the documents and build an in-memory vector index.
index = VectorStoreIndex.from_documents(documents)

# Wrap the index in a query engine and ask a question about the data.
query_engine = index.as_query_engine()
response = query_engine.query("Some question about the data should go here")
print(response)
```
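Rebuilding the index re-embeds every document on each run. A minimal sketch of persisting the index and reloading it later via LlamaIndex's storage context (the `./storage` directory name is just an example):
```python
from llama_index.core import StorageContext, load_index_from_storage

# Save the index to disk (defaults to a simple file-based store).
index.storage_context.persist(persist_dir="./storage")

# On a later run, reload the saved index instead of re-embedding.
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)
```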