  • February 17, 2024
  • Anaranniya N
Nvidia Introduces 'Chat with RTX' AI Chatbot: Exploring the Fusion of GenAI Software and Hardware

Nvidia has introduced a new AI application named "Chat with RTX," bringing advanced chatbot capabilities directly to users' personal computers. The software uses Nvidia's GeForce RTX GPUs to run natural language processing locally and in real time, with no cloud connection required.

Upon installation, Chat with RTX sets up a local Python server and web interface to handle user queries. Users can point the chatbot at a variety of inputs, including YouTube video URLs and personal documents, for it to analyze. The application can search transcripts for keywords, summarize videos or texts, and perform other functions.
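Nvidia has not published the internals of Chat with RTX, but the "local server plus web interface" pattern it describes looks roughly like the sketch below. The /chat endpoint and the answer_from_local_files() helper are hypothetical names used purely for illustration.

```python
# Illustrative sketch only: a local HTTP endpoint that answers questions about
# local files. Endpoint path and helper names are hypothetical, not Nvidia's.
from flask import Flask, request, jsonify

app = Flask(__name__)

def answer_from_local_files(question: str, source: str) -> str:
    """Hypothetical stand-in for retrieval + a locally running LLM over `source`."""
    return f"(answer to {question!r} using {source})"

@app.post("/chat")
def chat():
    payload = request.get_json(force=True)
    reply = answer_from_local_files(
        payload["question"], payload.get("source", "my_documents/")
    )
    return jsonify({"answer": reply})

if __name__ == "__main__":
    # Bind only to localhost: queries and documents never leave the machine.
    app.run(host="127.0.0.1", port=5000)
```

Keeping the server bound to localhost is what makes the privacy argument work: the browser-based interface talks only to a process on the same PC.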

According to the press release, "Chat with RTX employs retrieval-augmented generation (RAG), Nvidia TensorRT-LLM software, and Nvidia RTX acceleration to provide generative AI capabilities to local, GeForce-powered Windows PCs." This integration lets users connect local files on their PC to open-source large language models such as Mistral or Llama 2 and query them for quick, contextually relevant answers.

The application leverages the Tensor Cores on GeForce RTX 30 and 40 Series graphics cards to accelerate the matrix math that underpins neural-network inference. This dedicated hardware keeps responses fast compared with cloud-based APIs, and local processing enhances privacy, since users' data never leaves the PC.
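To make the RAG flow mentioned above concrete, here is a minimal sketch of the index-retrieve-generate loop over local text files. The embed() and generate() functions are hypothetical placeholders for whatever embedding model and local LLM (e.g. Mistral or Llama 2 served via TensorRT-LLM) the application actually uses; only the overall pattern is illustrated.

```python
# Minimal RAG sketch: index local files, retrieve the most relevant ones for a
# question, and hand that context to a local model. Placeholder functions only.
from pathlib import Path
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical placeholder: map text to a fixed-size vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def generate(prompt: str) -> str:
    """Hypothetical placeholder for a locally running LLM."""
    return f"[model answer based on a prompt of {len(prompt)} characters]"

# 1. Index: embed every local document once.
docs = {p: p.read_text(errors="ignore") for p in Path("my_notes").glob("*.txt")}
index = {p: embed(text) for p, text in docs.items()}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer(question: str, k: int = 3) -> str:
    # 2. Retrieve: pick the k documents most similar to the question.
    q_vec = embed(question)
    top = sorted(index, key=lambda p: cosine(index[p], q_vec), reverse=True)[:k]
    context = "\n\n".join(docs[p] for p in top)
    # 3. Generate: let the local model answer using the retrieved context.
    return generate(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

print(answer("What did I write about GPU memory requirements?"))
```

The retrieval step is what lets a relatively small local model answer questions about documents it was never trained on, which is the core idea behind Chat with RTX.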

The initial download is substantial at 40GB because it bundles the AI model files, and the Python server consumes around 3GB of RAM while active. Response times also vary depending on the GPU model in use.

In testing, Chat with RTX handled a variety of file formats, including .txt, .pdf, and .doc. It can even analyze YouTube captions to find search terms or generate video summaries, which could speed up research and analysis workflows. The tool is still in its early stages, however: it keeps no context or memory between questions and has quirks such as writing JSON files after indexing folders, which can clutter storage. Even so, Chat with RTX offers a promising glimpse of AI-assisted computing, suggesting that local chatbots like this could become a common way of enhancing personal devices as the software matures.