Monday, February 19, 2024

What is RAG? - Retrieval-Augmented Generation Explained

Retrieval-Augmented Generation (RAG) is an AI framework that improves the efficacy of large language models (LLMs) by grounding them in custom data. RAG combines information retrieval with generative AI so the system can return direct answers instead of just document matches.

Unlike a standalone LLM, which can only answer from the knowledge captured in its parameters at training time, a RAG system can pull in external, up-to-date documents at query time, so its knowledge can be refreshed without retraining the model.

The primary advantage of RAG-based systems is their ability to ground responses in current, domain-specific information. This makes them more effective in tasks such as dialogue systems, question answering, and text summarization.

RAG allows the LLM to present accurate information with source attribution. The output can include citations or references to sources. Users can also look up source documents themselves if they require further clarification or more detail. This can increase trust and confidence in your generative AI solution.

RAG uses an external datastore to build a richer prompt for LLMs. This prompt includes a combination of context, history, and recent or relevant knowledge. RAG retrieves relevant data and documents for a question or task and provides them as context for the LLM.

RAG is often the cheapest way to improve the accuracy of a GenAI application: instead of retraining or fine-tuning the model, you update the knowledge base and the context supplied to the LLM, usually with only a few code changes.
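
To make the flow described above concrete, here is a minimal, self-contained Python sketch of the RAG pattern: retrieve relevant documents, build a richer prompt, and hand that prompt to an LLM. The sample documents, the naive keyword-overlap retriever, and the call_llm helper mentioned in the final comment are illustrative placeholders, not a production retriever or a real API.

documents = [
    "Our support line is open 9am-5pm, Monday to Friday.",
    "Refunds are processed within 14 days of receiving the returned item.",
    "Premium subscribers get priority shipping on all orders.",
]

def retrieve(query, docs, top_k=2):
    # Naive retrieval: rank documents by keyword overlap with the query.
    query_terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(query_terms & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def build_prompt(query, context_docs):
    # Combine the retrieved context and the question into a single prompt.
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return (
        "Answer the question using only the context below, and cite the lines you used.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

query = "How long do refunds take?"
prompt = build_prompt(query, retrieve(query, documents))
print(prompt)  # in a real system: answer = call_llm(prompt)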

Sunday, February 18, 2024

How To Return Remote Desktop View To Full Screen

At times, when switching between users or computers, the Remote Desktop window can get stuck at another user profile's display resolution. This can be a problem for the users who log in afterwards.

To overcome this issue and make the session fit your screen resolution, here are the simple steps to follow on a Windows machine.

  1. Make sure hidden files are visible on your Windows PC (in File Explorer, enable viewing of hidden items).
  2. Close any Remote Desktop connection that is running.
  3. Go to your Documents folder (Start - Documents).
  4. Find the file Default.rdp (it is hidden by default).
  5. Delete that file, then start the Remote Desktop connection again.
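
If you prefer to script steps 4 and 5, here is a small Python sketch that deletes the cached Default.rdp file. It assumes the standard Documents folder location, so adjust the path if your Documents folder is redirected (for example, to OneDrive).

import os

# Default.rdp stores cached Remote Desktop display settings for the current user.
rdp_file = os.path.join(os.path.expanduser("~"), "Documents", "Default.rdp")

if os.path.exists(rdp_file):
    os.remove(rdp_file)  # a fresh file is created on the next connection
    print(f"Deleted {rdp_file}")
else:
    print(f"No cached settings file found at {rdp_file}")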

Hope this helps anyone who gets annoyed by Remote Desktop screen resolutions changing across multiple user logins!!

Wednesday, February 14, 2024

Dapper vs Entity Framework Core vs ADO.NET

The comparison between Dapper, Entity Framework Core, and ADO.NET in the context of .NET database access reveals the following key points:

  1. ADO.NET:

    • It is a low-level technology, providing fine-grained control over database operations.
    • Widely used in .NET applications for a long time but requires writing a significant amount of code for database interaction.
    • Supports direct SQL queries for enhanced control over performance.
  2. Entity Framework Core:

    • High-level ORM tool built on ADO.NET, easing database interaction by abstracting operations.
    • Supports multiple database providers and offers features like automatic schema migration, query translation, and change tracking.
    • Supports LINQ for query writing in C# instead of SQL, enhancing ease of use.
  3. Dapper:

    • Micro ORM built for speed and efficiency, providing a lightweight and fast way to work with databases.
    • Built on top of ADO.NET, it offers a simple API for database operations, ideal for scenarios where performance is critical.
    • Allows flexibility for writing SQL queries and mapping results to any class or structure.

Key Comparisons:

  • Performance: Dapper performs close to raw ADO.NET and is generally faster than Entity Framework Core, thanks to its lightweight, optimized design.
  • Ease of Use: EF Core provides a high-level API that abstracts database operations, making it easier to work with. Dapper requires writing SQL queries but is generally straightforward.
  • Features: EF Core offers a wide range of features, while Dapper provides speed and flexibility but lacks some high-level features.
  • Flexibility: Dapper and ADO.NET both let you write SQL directly and map results however you like; Dapper simply removes most of the boilerplate. EF Core trades some of that flexibility for a higher level of abstraction.

Choosing the right tool depends on project requirements:

  • Use Dapper for lightweight and fast database operations.
  • Employ EF Core for a high-level API and extensive features.
  • Opt for ADO.NET if fine-grained control over database operations is essential.

In conclusion, the choice of tool should align with the specific project needs, considering the trade-offs between performance, ease of use, features, and flexibility. Each tool offers pros and cons, and the decision should be based on the particular requirements of the application.

Monday, February 12, 2024

Learn Python for free!!!

Python is one of the easiest and most widely used programming languages. If you want to master Python, use these 5 FREE resources:

1. Learn Basic concepts of Python
https://cs50.harvard.edu/python/2022/

2. Learn Python basics for Data Analysis
https://t.co/0wPzZtaU25

3. Data Science with Python
https://t.co/dSRiUCKArm

4. Learn Django, a popular Python framework.
https://youtube.com/watch?v=rHux0gMZ3Eg

5. Learn Python and build 5 games with Free Code Camp's 6.5 hour tutorial.
https://youtube.com/watch?v=XGf2GcyHPhc

Happy Learning!!

Friday, February 09, 2024

[Solved] No module named MySQLdb

The error message "No module named 'MySQLdb'" typically indicates that Python cannot locate the MySQLdb module, which is a Python interface for accessing MySQL databases. This could be due to various reasons such as the module not being installed or the path to the installation directory not being correctly set. To fix this issue, you can either install the module using pip (the Python package installer) or set the path to the installation directory manually. 

To set the path to the MySQLdb installation directory in Python, you can follow these steps:

1. First, ensure that the MySQLdb module is installed in your Python environment. If not, you can install it using pip by running the following command in your terminal or command prompt:

pip install mysqlclient

2. Once the module is installed, you can check the installation path and set the path in Python using the following steps:

   - Open a Python environment or script.
   - At the top of your Python script or in the Python environment, you can set the path to the MySQLdb installation directory using the following code:

import sys
sys.path.append('/path/to/MySQLdb')

Replace "/path/to/MySQLdb" with the actual path to the MySQLdb installation directory on your system.

By setting the path in this way, you are enabling Python to locate the MySQLdb module when it is imported in your code. 
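
A quick way to confirm that the module can be located is to open a connection from Python. This is just a sketch; the host, user, password, and database values are placeholders to replace with your own settings.

import MySQLdb

# If this import succeeds, Python can find the MySQLdb module.
conn = MySQLdb.connect(host="localhost", user="myuser", passwd="mypassword", db="mydatabase")
cursor = conn.cursor()
cursor.execute("SELECT VERSION()")
print("MySQL server version:", cursor.fetchone()[0])
conn.close()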

Hope this helps!!

Pre-Training vs Fine-tuning vs Context injection

Pre-Training:

Pre-training is a foundational step in the LLM training process, where the model gains a general understanding of language by exposure to vast amounts of text data.

  1. Foundational step in large language model (LLM) training process, where the model learns general language understanding from vast amounts of text data.
  2. Involves unsupervised learning and masked language modelling techniques, utilizing transformer architecture to capture relationships between words.
  3. Enables text generation, language translation, and sentiment analysis among other use cases.

Fine-Tuning:

Fine-tuning involves taking a pre-trained model and adapting it to a specific task. Rather than rebuilding the architecture, it continues training the model's existing weights (often with a new task-specific head) on a smaller, labelled dataset so that performance improves on that task; a minimal code sketch follows the list below.

  1. Follows pre-training and involves specializing the LLM for specific tasks or domains by training it on a smaller, specialized dataset.
  2. Utilizes transfer learning, task-specific data, and gradient-based optimization techniques.
  3. Enables text classification, question answering, and other task-specific applications.
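
As a concrete illustration of the idea, here is a minimal fine-tuning sketch using the Hugging Face transformers and torch libraries. The base model name, the tiny two-example dataset, and the hyperparameters are illustrative assumptions, not a recommended training recipe.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Start from a pre-trained model and attach a new two-class classification head.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

# Tiny illustrative dataset: sentiment labels (1 = positive, 0 = negative).
texts = ["I loved this movie", "This was a waste of time"]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for _ in range(3):  # a few gradient steps on the specialized data
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()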

In-Context Learning:

In-context learning involves supplying instructions, relevant information, or worked examples directly in the prompt at inference time, so the model adapts its behaviour without any additional training or parameter updates. This is useful when fine-tuning is impractical or when the required knowledge changes frequently; a small prompting sketch follows the list below.

  1. Involves guiding the model's behavior based on specific context provided within the interaction itself, without altering the model's parameters or training it on a specific dataset.
  2. Utilizes carefully designed prompts to guide the model's responses and offers more flexibility compared to fine-tuning.
  3. Enables dialogue systems and advanced text completion, providing more personalized responses in various applications.
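
Here is a small sketch of the prompting idea: a few worked examples are placed directly in the prompt, and only the prompt changes, never the model's weights. The call_llm helper named in the final comment is hypothetical, standing in for whatever LLM client you actually use.

def build_few_shot_prompt(examples, query):
    # Turn (review, label) pairs into an in-context, few-shot prompt.
    lines = ["Classify the sentiment of each review as Positive or Negative.", ""]
    for review, label in examples:
        lines.append(f"Review: {review}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("The battery lasts all day.", "Positive"),
    ("It stopped working after a week.", "Negative"),
]
prompt = build_few_shot_prompt(examples, "Setup was quick and painless.")
print(prompt)  # in a real application: answer = call_llm(prompt)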

Key Points:

  • Pre-training is the initial phase where LLMs gain general understanding of language from vast text data through unsupervised learning and masked language modelling.
  • Fine-tuning follows pre-training and focuses on making the LLM proficient in specific tasks or domains by training it on a smaller, specialized dataset using transfer learning and gradient-based optimization.
  • In-Context Learning involves guiding the model's responses based on specific context provided within the interaction itself using carefully designed prompts, offering more flexibility compared to fine-tuning.
  • Each approach has distinct characteristics, use cases, and implications for leveraging LLMs in various applications.

Monday, February 05, 2024

Must-Take AI Courses to Elevate Your Skills in 2024

Looking to delve deeper into the realm of Artificial Intelligence this year? Here's a curated list of courses ranging from beginner to advanced levels that will help you sharpen your AI skills and stay at the forefront of this dynamic field:

Beginner Level:

  1. Introduction to AI - IBM
  2. AI Introduction by Harvard
  3. Intro to Generative AI
  4. Prompt Engineering Intro
  5. Google's Ethical AI

Intermediate Level:

  1. Harvard Data Science & ML
  2. ML with Python - IBM
  3. Tensorflow Google Cloud
  4. Structuring ML Projects

Advanced Level:

  1. Prompt Engineering Pro
  2. Advanced ML - Google
  3. Advanced Algos - Stanford

Bonus:

Feel free to explore these courses and take your AI expertise to new heights. Don't forget to share this valuable resource with your network to spread the knowledge!

With these courses, you'll be equipped with the necessary skills and knowledge to tackle the challenges and opportunities in the ever-evolving field of AI. Whether you're a beginner or an advanced practitioner, there's something for everyone in this comprehensive list of AI courses. Happy learning!

Sunday, February 04, 2024

ChatGPT's new tagging feature

Introducing ChatGPT's latest tagging feature, designed to seamlessly integrate multiple GPT models into your prompts and enhance conversations with a variety of expertise.

With a simple "@" followed by the name of the desired GPT, Mentions unlocks a world of possibilities. This seemingly minor update holds significant power: it lets you use multiple GPTs in the same chat, essentially forming a team of AI experts at your fingertips.

Microsoft Copilot Pro Overview

Microsoft has introduced Copilot Pro, a groundbreaking subscription service priced at $20 per month, aimed at revolutionizing interactions with Microsoft 365 applications.

In practical terms, Copilot Pro is aimed at individual users who want the most capable AI assistance inside the Microsoft 365 apps they already use, rather than at large enterprise deployments.

This premium offering stands out in the market thanks to its cutting-edge AI capabilities:

Access to Advanced AI: Copilot Pro subscribers gain early access to advanced AI models like OpenAI's GPT-4 Turbo, ensuring swift performance even during peak usage times.

Seamless Integration with Microsoft Apps: The service seamlessly integrates with Microsoft 365 apps such as Word, Excel, and PowerPoint, available across various platforms including PC, Mac, and iPad.

AI-Powered Tools: Users can leverage AI assistance to generate documents, presentations, and emails, create AI images, and develop custom Copilot GPTs for personalized tasks.

Data Security: Signing in with Microsoft Entra ID keeps chat data private, and conversations are not used to train the underlying AI models.

Cross-Device Functionality: Copilot Pro offers a seamless AI experience across different devices, spanning web, PCs, and soon mobile phones.

Multilingual Support: While Excel's Copilot supports English exclusively, other apps offer multiple language options including Spanish, Japanese, and French.

Copilot Pro underscores Microsoft's dedication to integrating state-of-the-art AI into everyday work environments, offering an unmatched, secure, and smooth productivity experience across various platforms and languages.

Saturday, February 03, 2024

Characteristics of LLM Pre-Training

The characteristics of LLM pre-training include the following:

  1. Unsupervised Learning: LLM pre-training involves unsupervised learning, where the model learns from the vast amounts of text data without explicit human-labeled supervision. This allows the model to capture general patterns and structures in the language.

  2. Masked Language Modeling: During pre-training, the model learns to predict masked or hidden words within sentences, which helps it understand the context and relationships between words in a sentence or document (see the short example after this list).

  3. Transformer Architecture Utilization: LLMs typically utilize transformer architecture, which allows them to capture long-range dependencies and relationships between words in the input text, making them effective in understanding and generating human language.

  4. General Language Understanding: Pre-training enables the LLM to gain a broad and general understanding of language, which forms the foundation for performing various natural language processing tasks such as text generation, language translation, sentiment analysis, and more.
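
To make point 2 concrete, here is a small sketch using the Hugging Face transformers library to run a masked-word prediction with a pre-trained model; the specific model name is just an illustrative choice.

from transformers import pipeline

# Load a pre-trained masked-language model and ask it to fill in the blank.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for prediction in unmasker("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))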

These characteristics contribute to the ability of LLMs to understand and generate human language effectively across a wide range of applications and domains.

Friday, February 02, 2024

Removing Cached login and password list in SQL Server Management Studio

You need to look in the following location, based on the SSMS version installed on your local PC.

Since mine is version 19.0, below is my path.

C:\Users\sconrey\AppData\Roaming\Microsoft\SQL Server Management Studio\19.0

Open UserSettings.xml in Notepad++ or any editor of your choice.

Find the user you would like to remove and delete the entire <Element> tag related to that user.

<ServerTypeItem>
    <Servers>
        <Element>
            ...
        </Element>
    </Servers>
</ServerTypeItem>

You need to remove the complete <Element> tag from the file and save it. Make sure SSMS is closed while you do this; otherwise your changes will not be saved.
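
If you prefer to script the cleanup, below is a rough Python sketch that removes matching <Element> entries. It assumes the nesting shown above and that the cached server or user name appears somewhere in the element's text, so treat it as a starting point rather than a drop-in tool, and back up UserSettings.xml first.

import xml.etree.ElementTree as ET

# Both values are placeholders - point them at your own SSMS settings file and user.
settings_path = r"C:\Users\sconrey\AppData\Roaming\Microsoft\SQL Server Management Studio\19.0\UserSettings.xml"
name_to_remove = "olduser"

tree = ET.parse(settings_path)
root = tree.getroot()

# Walk every <Servers> node and drop <Element> children that mention the name.
for servers in root.iter("Servers"):
    for element in list(servers.findall("Element")):
        if name_to_remove.lower() in ET.tostring(element, encoding="unicode").lower():
            servers.remove(element)

tree.write(settings_path, encoding="utf-8", xml_declaration=True)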

Thursday, February 01, 2024

Improvements and enhancements in .NET 8

.NET 8 is the latest version of the .NET platform and includes numerous improvements and enhancements over its predecessors. Some of the key enhancements in .NET 8 include:

1. ASP.NET Core 8.0 - the web stack that ships with .NET 8 brings significant performance improvements, expanded native AOT support for web APIs, and enhancements to Blazor and the built-in authentication (Identity) APIs.
2. JSON support - .NET 8 provides better support for working with JSON data, making it easier to parse and serialize JSON in your applications.
3. C# language improvements - .NET 8 ships with C# 12, which adds features such as primary constructors and collection expressions, alongside runtime-level improvements including a smarter garbage collector.

Here are some of the other notable improvements:

  • Native Ahead-of-Time (AOT) Compilation.
  • Code Generation Enhancements.
  • Garbage Collector Improvements.
  • JSON Enhancements.
  • Compression Enhancements.
  • Randomness Tools.
  • Cryptography Fortifications.
  • Silicon-Specific Features.
  • Time Abstraction.

About Google Gemini

Google has introduced Gemini, a groundbreaking artificial intelligence model that boasts superior capabilities in understanding, summarizing, reasoning, coding, and planning compared to other AI models.

The Gemini model is offered in three versions: Pro, Ultra, and Nano. The Pro version is already available, while the Ultra version is slated for release early next year.

Gemini has been seamlessly integrated with Google’s chatbot Bard, a direct competitor to ChatGPT. Users can now engage in text-based interactions with the Gemini-powered Bard.

Although currently limited to English, the new update is accessible to users in 170 countries and territories, including India. The capabilities of Gemini can be experienced through the Google Bard chatbot.

Gemini Nano is now available on Pixel 8 Pro, introducing enhanced features like summarization in the Recorder app and Smart Reply on Gboard.

Meanwhile, Gemini Pro can be accessed for free within Bard, offering users the opportunity to explore its advanced text-based capabilities.

Gemini Ultra achieved a remarkable 90.0% on the MMLU (massive multitask language understanding) benchmark, which covers subjects such as math, physics, history, law, medicine, and ethics, assessing both knowledge and problem-solving capabilities.

Limitations of Google Gemini

While Gemini Pro integrated into Bard brings promising advancements, it’s crucial to be aware of certain limitations:

Language Limitation: Gemini Pro is currently available only in English, limiting its accessibility on a global scale.

Integration Constraints: Although Bard has embraced Gemini Pro, its integration within the chatbot is presently limited. Google is anticipated to enhance integration and refine the AI capabilities in the coming updates.

Geographical Constraints: Gemini Pro is not available in the European Union, imposing geographical limitations on its usage.

Text-Based Version Only: As of now, only the text-based version of Gemini Pro is accessible within Bard. Users seeking multimedia interactions may need to await future updates for a more diverse range of features.