AI

  • Published on
    When storing data in memory, the data type used to represent it affects both memory usage and the performance of the overall system. Consider saving a number. At a high level, it can be either an integer (a whole number) or a floating-point number (a number with decimals). Floating-point numbers can represent a larger range of values with higher precision. Weights and biases in a large language model, which are learned during training and used to make predictions, are stored as floating-point numbers to maintain high precision. The count of these parameters determines the size of the model, its memory footprint, and how much computational power is needed to run it. In this post, we will discuss how quantization can reduce the memory usage of models and improve performance (assuming the loss of precision is acceptable). A minimal quantization sketch follows at the end of this list.
  • Published on
    Oracle, among other companies, recently announced that 50+ role-based AI agents within the Oracle Fusion Cloud Applications Suite will help execute frequent, repetitive tasks. Other companies are doing the same. In this article I will discuss what AI agents are, cover some of their use cases, and link to tools/frameworks that can help you design and build agents.
  • Published on
    Retrieval Augmented Generation (RAG) has been one of the most common LLM use cases implemented in the past couple of years. It is a technique that enhances the capabilities of LLMs by combining them with external knowledge sources: relevant information is retrieved from a knowledge base, incorporated into the LLM's context, and a response is then generated that leverages both the LLM's internal knowledge and the retrieved information. Building RAG applications requires integrating various components, such as vector databases and search algorithms, which can be quite involved. In this post we'll briefly cover RAG basics and how to leverage OpenAI's Assistants to build simple RAG applications. A bare-bones retrieval sketch appears after this list.
  • Published on
    This blog post will take you through the process of building a recommendation system and introduce the concepts of embeddings and vector databases, along with their various use cases. These concepts are not limited to recommendation systems; they are widely used in domains such as image recognition, natural language processing, semantic search, and anomaly detection. Representing complex, high-dimensional data in a dense, lower-dimensional space is a fundamental technique in machine learning. A small embedding-based recommendation sketch follows below.
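
To make the quantization idea from the first post concrete, here is a minimal sketch using NumPy: a float32 weight matrix is mapped to int8 with a single scale factor (symmetric quantization). The matrix and its size are made up for illustration; real model quantization schemes (per-channel scales, zero points, calibration) are more involved.

```python
import numpy as np

# Hypothetical weight matrix, stored in float32 (4 bytes per value).
weights = np.random.randn(1024, 1024).astype(np.float32)

# Symmetric int8 quantization: map the float range onto [-127, 127].
scale = np.abs(weights).max() / 127.0
q_weights = np.round(weights / scale).astype(np.int8)  # 1 byte per value

# Dequantize to approximate the original values when they are needed.
deq_weights = q_weights.astype(np.float32) * scale

print(f"float32 size: {weights.nbytes / 1e6:.2f} MB")   # ~4.19 MB
print(f"int8 size:    {q_weights.nbytes / 1e6:.2f} MB") # ~1.05 MB
print(f"mean abs error: {np.abs(weights - deq_weights).mean():.5f}")
```

The memory drops to roughly a quarter of the original, at the cost of a small reconstruction error per weight.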
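Related to the RAG post, here is a bare-bones sketch of the retrieval step: documents and the query are embedded as vectors, the closest documents are found by cosine similarity, and the result is stitched into the prompt. The embed function below is a stand-in that produces deterministic random vectors purely to show the mechanics; in a real application you would call an actual embedding model or API.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding function -- in practice, call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

# Tiny in-memory "knowledge base" with precomputed embeddings.
documents = [
    "Quantization reduces model memory by storing weights in lower precision.",
    "RAG augments an LLM prompt with documents retrieved from a knowledge base.",
    "Vector databases index embeddings for fast similarity search.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    scores = doc_vectors @ embed(query)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

query = "How does RAG work?"
context = "\n".join(retrieve(query))
prompt = f"Answer using the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
print(prompt)  # This augmented prompt would then be sent to the LLM for generation.
```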
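And for the recommendation-system post, a tiny example of embedding-based recommendation: each item is a dense vector, the user is represented as the average of the items they liked, and the closest remaining items are recommended. The items and their three-dimensional vectors are invented for illustration; real systems learn much higher-dimensional embeddings from interaction data.

```python
import numpy as np

# Hypothetical item embeddings: each item is a dense vector capturing its "meaning".
items = {
    "sci-fi movie":      np.array([0.9, 0.1, 0.0]),
    "space documentary": np.array([0.8, 0.2, 0.1]),
    "romantic comedy":   np.array([0.1, 0.9, 0.2]),
    "cooking show":      np.array([0.0, 0.2, 0.9]),
}

def recommend(liked: list[str], k: int = 2) -> list[str]:
    """Recommend the k items closest (by cosine similarity) to the
    average embedding of the items the user already liked."""
    user_vec = np.mean([items[name] for name in liked], axis=0)
    user_vec /= np.linalg.norm(user_vec)
    scores = {
        name: float(vec @ user_vec / np.linalg.norm(vec))
        for name, vec in items.items()
        if name not in liked
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(["sci-fi movie"]))  # -> ['space documentary', 'romantic comedy']
```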