
Automated End-to-End Fine-Tuning of OpenAI Models with Google Drive Integration


Created by n3w Italia (n3witalia)


Template description

1. How it Works

This n8n workflow automates fine-tuning OpenAI models through these key steps:

  • Manual Trigger:
    • Starts with the "When clicking ‘Test workflow’" event to initiate the process.
    • Downloads the training .jsonl file from Google Drive via the "Google Drive" node.
  • Upload to OpenAI:
    • Uploads the .jsonl file to OpenAI via the "Upload File" node (with purpose "fine-tune").
  • Create Fine-tuning Job:
    • Sends a POST request to the endpoint https://api.openai.com/v1/fine_tuning/jobs with:
      {
        "training_file": "{{ $json.id }}",
        "model": "gpt-4o-mini-2024-07-18"
      }
    • OpenAI automatically starts training the model based on the provided file.
  • Interaction with the Trained Model:
    • An "AI Agent" uses the custom model (e.g., ft:gpt-4o-mini-2024-07-18:n3w-italia::XXXX7B) to respond to chat messages.

2. Setup Steps

To configure the workflow:

  1. Prepare the Training File:

    • Create a .jsonl file following the specified syntax (e.g., travel assistant Q/A examples); sample lines are shown after this list.
    • Upload it to Google Drive and update the ID in the "Google Drive" node.
  2. Configure Credentials:

    • Google Drive: Connect an account via OAuth2 (googleDriveOAuth2Api).
    • OpenAI: Add your API key in the "OpenAI Chat Model" and "Upload File" nodes.
  3. Customize the Model:

    • In the "OpenAI Chat Model" node, specify the name of your fine-tuned model (e.g., ft:gpt-4o-mini-...).
    • Update the HTTP request body (Create Fine-tuning Job) if needed (e.g., a different base model).
  4. Start the Workflow:

    • Use the manual trigger ("Test workflow") to begin the upload and training process.
    • Test the model via the "Chat Trigger" (chat messages).
  5. Integrated Documentation:

    • Follow the instructions in the Sticky Notes to:
      • Properly format the .jsonl (Step 1).
      • Monitor progress on OpenAI (Step 2, link: https://platform.openai.com/finetune/); the job can also be polled via the API, as sketched after the note below.
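
For reference, each line of the training file is a standalone JSON object in OpenAI's chat format (a system/user/assistant exchange). The two lines below are illustrative travel-assistant examples, not the template's actual dataset:

  {"messages": [{"role": "system", "content": "You are a helpful travel assistant."}, {"role": "user", "content": "What documents do I need for a trip to Japan?"}, {"role": "assistant", "content": "For most visitors a valid passport is enough for short stays, but check the visa rules for your nationality before booking."}]}
  {"messages": [{"role": "system", "content": "You are a helpful travel assistant."}, {"role": "user", "content": "When is the best time to visit Sicily?"}, {"role": "assistant", "content": "Late spring and early autumn offer warm weather without the peak-summer crowds."}]}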

Note: Ensure the .jsonl file adheres to OpenAI’s required structure and that credentials are valid.
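
If you would rather check training progress from code than from the dashboard linked in Step 2, a minimal polling sketch with the openai Python SDK looks like this (the job ID placeholder is whatever the "Create Fine-tuning Job" request returned):

  import time

  from openai import OpenAI

  client = OpenAI()
  job_id = "ftjob-..."  # placeholder: returned by the fine-tuning job creation call

  while True:
      job = client.fine_tuning.jobs.retrieve(job_id)
      print(job.status)  # e.g. validating_files, queued, running, succeeded, failed
      if job.status in ("succeeded", "failed", "cancelled"):
          print("Fine-tuned model:", job.fine_tuned_model)
          break
      time.sleep(60)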


More AI workflow templates

Uses: OpenAI Chat Model, SerpApi (Google Search)

AI agent chat

This workflow employs OpenAI's language models and SerpAPI to create a responsive, intelligent conversational agent. It comes equipped with manual chat triggers and memory buffer capabilities to ensure seamless interactions. To use this template, you need to be on n8n version 1.50.0 or later.
By n8n Team (n8n-team)
Uses: HTTP Request, Merge, and 7 more nodes

Scrape and summarize webpages with AI

This workflow integrates both web scraping and NLP functionalities. It uses HTML parsing to extract links, HTTP requests to fetch essay content, and AI-based summarization using GPT-4o. It's an excellent example of an end-to-end automated task that is not only efficient but also provides real value by summarizing valuable content. Note that to use this template, you need to be on n8n version 1.50.0 or later.
By n8n Team (n8n-team)
Uses: HTTP Request, Markdown, and 5 more nodes

AI agent that can scrape webpages

⚙️🛠️🚀🤖🦾 This template is a PoC of a ReAct AI Agent capable of fetching arbitrary pages (not only Wikipedia or Google search results). On the top part there's a manual chat node connected to a LangChain ReAct Agent, and the agent has access to a workflow tool for getting page content. Page content extraction starts by converting query parameters into a JSON object with three pre-defined parameters: url (the address of the page to fetch), method (full or simplified), and maxlimit (the maximum length for the final page; for longer pages an error message is returned to the agent). Page content fetching is a multistep process: an HTTP Request node tries to get the page content, and if it is retrieved successfully a series of post-processing steps begins: the HTML body content is extracted, unnecessary tags are removed to reduce the page size, external URLs and IMG src values are further eliminated (based on the method query parameter), and the remaining HTML is converted to Markdown, reducing the page length even more while preserving the basic page structure. The remaining content is sent back to the agent if it is not too long (maxlimit = 70000 by default, see the CONFIG node). NB: you can isolate the HTTP Request part into a separate workflow; check the Workflow Tool description, which guides the agent to provide a query string with several parameters instead of a JSON object. Please reach out to Eduard if you need further assistance with your n8n workflows and automations! Note that to use this template, you need to be on n8n version 1.19.4 or later.
By Eduard (eduard)
Uses: Merge, Telegram, Telegram Trigger, and 2 more nodes

Telegram AI Chatbot

The workflow starts by listening for messages from Telegram users. The message is then processed, and based on its content, different actions are taken. If it's a regular chat message, the workflow generates a response using the OpenAI API and sends it back to the user. If it's a command to create an image, the workflow generates an image using the OpenAI API and sends the image to the user. If the command is unsupported, an error message is sent. Throughout the workflow, there are additional nodes for displaying notes and simulating typing actions.
By Eduard (eduard)
Uses: HTTP Request, WhatsApp Business Cloud, and 10 more nodes

Building Your First WhatsApp Chatbot

This n8n template builds a simple WhatsApp chatbot acting as a Sales Agent. The agent is backed by a product-catalog vector store to better answer users' questions. This template is intended to introduce n8n users interested in building with WhatsApp. How it works: the template is in two parts, creating the product catalog vector store and building the WhatsApp AI chatbot. A product brochure is imported via the HTTP Request node and its text contents extracted; the text is then uploaded to the in-memory vector store to build a knowledge base for the chatbot. A WhatsApp trigger captures messages from customers, filtering out non-text messages. Each customer message is sent to the AI Agent, which queries the product catalogue using the vector store tool, and the agent's response is sent back to the user via the WhatsApp node. How to use: once you've set up and configured your WhatsApp account and credentials, first populate the vector store by clicking the "Test Workflow" button, then activate the workflow to enable the WhatsApp chatbot. Message your designated WhatsApp number and you should receive a reply from the AI sales agent. Tweak the data source and behaviour as required. Requirements: a WhatsApp Business Account and OpenAI for the LLM. Customising this workflow: upgrade the vector store to Qdrant for persistence and production use cases, and handle different WhatsApp message types for a richer and more engaging customer experience.
By Jimleuk (jimleuk)
Uses: Google Drive, Binary Input Loader, Embeddings OpenAI, OpenAI Chat Model, and 5 more nodes

Ask questions about a PDF using AI

The workflow first populates a Pinecone index with vectors from a Bitcoin whitepaper. Then, it waits for a manual chat message. When received, the chat message is turned into a vector and compared to the vectors in Pinecone. The most similar vectors are retrieved and passed to OpenAI for generating a chat response. Note that to use this template, you need to be on n8n version 1.19.4 or later.
By David Roberts (davidn8n)
