Monday, March 25, 2024

LLM Structured Output for Function Calling with Ollama

I explain how function calling works with an LLM. This is an often-confused concept: the LLM doesn't call a function itself - it returns a JSON response with the values to be used for a function call in your environment. In this example I'm using a Sparrow agent to call a function.
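To make the flow concrete, here is a minimal sketch of the pattern, assuming a local Ollama server on its default port; the model name and the get_exchange_rate function are illustrative, not Sparrow's actual code:

```python
# The LLM never executes anything itself: it returns JSON, and our own
# code dispatches the actual function call.
import json
import requests

def get_exchange_rate(currency: str) -> float:
    """Our local function; the LLM only supplies its arguments."""
    rates = {"EUR": 1.08, "GBP": 1.27}  # dummy data for illustration
    return rates.get(currency, 1.0)

prompt = (
    "Extract the function call for this request as JSON with keys "
    '"function" and "arguments". Request: "What is the EUR rate?" '
    "Available function: get_exchange_rate(currency)."
)

# Ollama's /api/generate endpoint with format=json forces a JSON reply
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": prompt, "format": "json", "stream": False},
)
call = json.loads(resp.json()["response"])
# e.g. {"function": "get_exchange_rate", "arguments": {"currency": "EUR"}}

# The dispatch happens in our environment, not inside the LLM
if call.get("function") == "get_exchange_rate":
    print(get_exchange_rate(**call["arguments"]))
```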


Sunday, March 17, 2024

FastAPI File Upload and Temporary Directory for Stateless API

I explain how to handle file uploads with FastAPI and how to process a file using a Python temporary directory. Files placed into the temporary directory are automatically removed once the request completes, which is very convenient for a stateless API.
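A minimal sketch of the pattern, with a hypothetical /upload endpoint:

```python
# Save the upload into a TemporaryDirectory, process it, and let Python
# delete the directory when the request finishes.
from pathlib import Path
from tempfile import TemporaryDirectory

from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post("/upload")
async def upload(file: UploadFile = File(...)):
    with TemporaryDirectory() as tmp_dir:
        # Write the uploaded bytes to a real file inside the temp directory
        tmp_path = Path(tmp_dir) / file.filename
        tmp_path.write_bytes(await file.read())

        # Process the file here while it exists on disk
        size = tmp_path.stat().st_size

    # The directory and its contents are gone once the `with` block exits,
    # so the API stays stateless between requests.
    return {"filename": file.filename, "size": size}
```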


Sunday, March 10, 2024

Optimizing Receipt Processing with LlamaIndex and PaddleOCR

The LlamaIndex Text Completion function allows you to execute an LLM request that combines custom data and the question, without using a vector DB. This is very useful when processing output from OCR, since it simplifies the RAG pipeline. In this video I explain how OCR can be combined with an LLM to process image documents in Sparrow.
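A minimal sketch of the idea, assuming the llama-index-llms-ollama package and a local Ollama server; the OCR text and model name are placeholders:

```python
# Feed OCR output straight into an LLM completion call, with the question
# embedded in the prompt - no vector DB or index needed.
from llama_index.llms.ollama import Ollama

ocr_text = "RECEIPT\nCoffee 3.50\nBagel 2.75\nTotal 6.25"  # PaddleOCR output would go here
question = "What is the total amount on the receipt?"

llm = Ollama(model="mistral", request_timeout=120.0)

prompt = (
    "Use only the document text below to answer the question.\n\n"
    f"Document:\n{ocr_text}\n\n"
    f"Question: {question}\nAnswer:"
)

print(llm.complete(prompt))
```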


Sunday, March 3, 2024

LlamaIndex Multimodal with Ollama [Local LLM]

I describe how to run LlamaIndex Multimodal with a local LLaVA LLM through Ollama. The advantage of this approach is that you can process image documents with the LLM directly, without running them through OCR, which should lead to better results. This functionality is integrated into Sparrow as a separate LLM agent.
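A minimal sketch, assuming the llama-index-multi-modal-llms-ollama package and a LLaVA model pulled into Ollama (`ollama pull llava`); the directory and prompt are placeholders:

```python
from llama_index.core import SimpleDirectoryReader
from llama_index.multi_modal_llms.ollama import OllamaMultiModal

mm_llm = OllamaMultiModal(model="llava")

# Load image documents directly - no OCR step in between
image_documents = SimpleDirectoryReader("./invoice_images").load_data()

response = mm_llm.complete(
    prompt="List the invoice number and total amount visible in this document.",
    image_documents=image_documents,
)
print(response)
```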


Monday, February 26, 2024

LLM Agents with Sparrow

I explain new functionality in Sparrow - LLM agent support. This means you can implement independently running agents and invoke them from the CLI or API. It makes it easier to run various LLM-related processing within Sparrow.
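As a rough illustration of the idea (the names below are hypothetical, not Sparrow's actual code), an agent registry lets a CLI command and an API route share one entry point:

```python
# Hypothetical sketch of the agent-dispatch pattern described above
from typing import Callable, Dict

AGENTS: Dict[str, Callable[[str], str]] = {}

def register_agent(name: str):
    """Decorator that adds an agent function to the registry."""
    def wrapper(fn: Callable[[str], str]) -> Callable[[str], str]:
        AGENTS[name] = fn
        return fn
    return wrapper

@register_agent("echo")
def echo_agent(payload: str) -> str:
    return f"echo agent received: {payload}"

def run_agent(name: str, payload: str) -> str:
    """Single entry point a CLI command or an API route can both call."""
    return AGENTS[name](payload)

if __name__ == "__main__":
    print(run_agent("echo", "hello"))
```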


Tuesday, February 20, 2024

Extracting Invoice Structured Output with Haystack and Ollama Local LLM

I implemented a Sparrow agent with Haystack structured output functionality to extract invoice data. This runs locally through Ollama, using the LLM to retrieve key/value pair data.
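A minimal sketch of the extraction step, assuming the ollama-haystack integration package; the field names, model, and invoice text are placeholders, not Sparrow's actual config:

```python
import json

from haystack_integrations.components.generators.ollama import OllamaGenerator

generator = OllamaGenerator(model="mistral")

invoice_text = "Invoice No: 61356291\nClient: John Doe\nTotal: 212.09"
prompt = (
    "Return only valid JSON with the keys invoice_number, client and total, "
    f"extracted from this invoice text:\n{invoice_text}"
)

# The generator returns a dict with a "replies" list; we ask the model
# for JSON only, then parse it into key/value pairs
reply = generator.run(prompt=prompt)["replies"][0]
print(json.loads(reply))
```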


Sunday, February 4, 2024

Local LLM RAG Pipelines with Sparrow Plugins [Python Interface]

There are many tools and frameworks around LLMs, evolving and improving daily. I added plugin support in Sparrow to run different pipelines through the same Sparrow interface. Each pipeline can be implemented with different tech (LlamaIndex, Haystack, etc.) and run independently. The main advantage is that you can test various RAG functionalities from a single app with a unified API and choose the one that works best for a specific use case.
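As a rough sketch of the plugin idea (the class and function names here are hypothetical, not Sparrow's actual interface), each pipeline implements one interface and the app picks an implementation by name:

```python
from abc import ABC, abstractmethod

class Pipeline(ABC):
    @abstractmethod
    def run(self, query: str) -> str:
        ...

class LlamaIndexPipeline(Pipeline):
    def run(self, query: str) -> str:
        return f"[llamaindex] answer for: {query}"  # real RAG logic would go here

class HaystackPipeline(Pipeline):
    def run(self, query: str) -> str:
        return f"[haystack] answer for: {query}"

PIPELINES = {"llamaindex": LlamaIndexPipeline, "haystack": HaystackPipeline}

def get_pipeline(name: str) -> Pipeline:
    """Unified entry point: same interface, different backend."""
    return PIPELINES[name]()

print(get_pipeline("llamaindex").run("extract invoice total"))
```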