
Vector Database as a Big Data Analysis Tool for AI Agents [1/3 anomaly][1/2 KNN]

Created by: Jenny (mrscoopers)

Last update: 6 months ago


Workflows from the webinar "Build production-ready AI Agents with Qdrant and n8n".

This series of workflows shows how to build big data analysis tools for production-ready AI agents with the help of vector databases. These pipelines are adaptable to any dataset of images and, hence, to many production use cases.

  1. Uploading (image) datasets to Qdrant
  2. Setting up meta-variables for anomaly detection in Qdrant
  3. Anomaly detection tool
  4. KNN classifier tool

For anomaly detection

1. The first pipeline uploads an image dataset to Qdrant.
2. The second pipeline sets up the cluster (class) centres and cluster (class) threshold scores needed for anomaly detection.
3. The third is the anomaly detection tool itself: it takes any image as input and uses all the preparatory work done in Qdrant to detect whether that image is an anomaly relative to the uploaded dataset (sketched below).
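
In essence, the tool embeds the input image and compares it with each stored class centre; if its similarity falls below every centre's threshold, the image is flagged as an anomaly. Below is a minimal Python sketch of just that decision step, assuming the centres and cosine-similarity thresholds computed by the second pipeline are already at hand (the function and variable names are illustrative, not taken from the workflow):

```python
import numpy as np

def is_anomaly(image_vector, class_centres):
    """class_centres: list of (centre_vector, similarity_threshold) pairs,
    as produced by the second (meta-variables) pipeline.
    The image counts as an anomaly if it is not similar enough
    to any known class centre."""
    v = np.asarray(image_vector, dtype=float)
    v /= np.linalg.norm(v)
    for centre, threshold in class_centres:
        c = np.asarray(centre, dtype=float)
        c /= np.linalg.norm(c)
        if float(v @ c) >= threshold:
            return False  # close enough to a known class -> not an anomaly
    return True  # below every class threshold -> anomaly
```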

For KNN (k nearest neighbours) classification

1. The first pipeline uploads an image dataset to Qdrant.
2. The second is the KNN classifier tool: it takes any image as input and classifies it against the dataset uploaded to Qdrant (sketched below).
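
As a rough sketch of what the KNN step amounts to, the snippet below uses the qdrant-client Python package to fetch the k nearest neighbours of a query embedding and takes a majority vote over their class labels. The collection name, payload field, and value of k are assumptions; the actual workflow performs these steps through n8n nodes:

```python
from collections import Counter

from qdrant_client import QdrantClient

client = QdrantClient(url="https://YOUR-CLUSTER.qdrant.io", api_key="YOUR_API_KEY")

def knn_classify(image_vector, k=10):
    # Fetch the k most similar images, including their payloads (labels).
    hits = client.query_points(
        collection_name="crops",  # assumed collection name
        query=image_vector,
        limit=k,
        with_payload=True,
    ).points
    # Majority vote over the neighbours' class labels.
    votes = Counter(hit.payload["crop_name"] for hit in hits)  # assumed payload field
    return votes.most_common(1)[0][0]
```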

To recreate both

You'll have to upload the crops and lands datasets from Kaggle to your own Google Cloud Storage bucket, and re-create the connections to Qdrant Cloud (a Free Tier cluster is enough), the Voyage AI API, and Google Cloud Storage.

[This workflow] Batch Uploading Images Dataset to Qdrant

This template imports dataset images from Google Cloud Storage, creates Voyage AI embeddings for them in batches, and uploads them to Qdrant, also in batches. This particular template works with the crops dataset; uploading the lands dataset is analogous, and in general the pipeline is adaptable to any dataset consisting of image URLs (as are the pipelines that follow).

  • First, check for an existing Qdrant collection to use; if there isn't one, create it. When creating the collection, also create a payload index, which is required for a particular type of Qdrant request used later.
  • Next, import all the dataset images from Google Cloud Storage, keeping only the non-tomato-related ones (the excluded tomato images can then be used to test anomaly detection).
  • Create embeddings for all imported images, batch by batch, using the Voyage AI multimodal embeddings API.
  • Finally, upload the resulting embeddings and image descriptors to Qdrant via batch upload. A Python sketch of the equivalent flow follows this list.
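
For reference, here is a minimal Python sketch of the same flow outside n8n, using the qdrant-client package. The collection name, payload field, batch size, vector size, and the embed_batch helper (standing in for the Voyage AI multimodal embeddings call) are all assumptions, not values taken from the workflow:

```python
from qdrant_client import QdrantClient, models

COLLECTION = "crops"  # assumed collection name
BATCH_SIZE = 64       # assumed batch size
VECTOR_SIZE = 1024    # assumed embedding dimension

client = QdrantClient(url="https://YOUR-CLUSTER.qdrant.io", api_key="YOUR_API_KEY")

# Create the collection once; the keyword payload index on the class label
# enables the indexed payload requests used by the later pipelines.
if not client.collection_exists(COLLECTION):
    client.create_collection(
        collection_name=COLLECTION,
        vectors_config=models.VectorParams(
            size=VECTOR_SIZE, distance=models.Distance.COSINE
        ),
    )
    client.create_payload_index(
        collection_name=COLLECTION,
        field_name="crop_name",  # assumed payload field
        field_schema=models.PayloadSchemaType.KEYWORD,
    )

def embed_batch(image_urls):
    """Hypothetical helper standing in for the Voyage AI multimodal
    embeddings call; returns one vector per image URL."""
    raise NotImplementedError

def upload(image_urls, labels):
    # Embed and upsert the dataset one batch at a time.
    for start in range(0, len(image_urls), BATCH_SIZE):
        urls = image_urls[start:start + BATCH_SIZE]
        batch_labels = labels[start:start + BATCH_SIZE]
        vectors = embed_batch(urls)
        client.upsert(
            collection_name=COLLECTION,
            points=[
                models.PointStruct(
                    id=start + i,
                    vector=vec,
                    payload={"crop_name": lab, "image_url": url},
                )
                for i, (vec, url, lab) in enumerate(zip(vectors, urls, batch_labels))
            ],
        )
```

Batching both the embedding calls and the upserts keeps memory bounded and avoids request-size limits, which is the same reason the n8n workflow processes the dataset per batch.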