In this sample we will use KDB.AI’s qHNSW index to store and retrieve document vector embeddings. Because the qHNSW index is stored on disk rather than in memory, it has an extremely low memory footprint for both insertion into the index and vector search, making it a great option for memory-constrained environments such as edge devices.
qHNSW offers several benefits over traditional vector indices:
- Reduced Memory Footprint: Data inserts have a much smaller memory footprint than with existing HNSW indexes.
- Incremental Disk Access: Data searches read from disk incrementally, keeping memory utilization extremely low.
- Cost Effectiveness: On-disk storage is generally less expensive and consumes less power than in-memory storage.
- Improved Scalability: With qHNSW, users can create as many indexes as disk space allows and search them all at once.
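As a rough sketch of how this fits together, the snippet below defines a table with a qHNSW index using the KDB.AI Python client. The table name, column names, embedding dimension, and metric here are illustrative assumptions, not values from this sample, and the exact client API may differ between KDB.AI versions:

```python
# Requires the KDB.AI client: pip install kdbai-client

# Illustrative schema: a document ID column plus a float32 embedding column.
schema = [
    {"name": "id", "type": "str"},
    {"name": "embeddings", "type": "float32s"},
]

# qHNSW index definition; "dims" and "metric" are assumptions for this sketch.
indexes = [
    {
        "name": "vec_index",
        "type": "qHNSW",          # on-disk HNSW variant
        "column": "embeddings",
        "params": {"dims": 384, "metric": "L2"},
    }
]


def create_documents_table(endpoint: str = "http://localhost:8082"):
    """Create a 'documents' table with a qHNSW index on a KDB.AI server.

    The import is deferred so this sketch can be loaded without the
    kdbai-client package installed.
    """
    import kdbai_client as kdbai

    session = kdbai.Session(endpoint=endpoint)
    db = session.database("default")
    return db.create_table("documents", schema=schema, indexes=indexes)
```

Searching the resulting table then works the same way as with an in-memory index; only the storage and access pattern differ, which is what keeps memory usage low.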
See a full sample in our GitHub repository, or open the code directly in Google Colab.