Assembly Line

Unleashing the Potential of Large Language Models in Robotics: RoboDK’s Virtual Assistant

📅 Date:

🔖 Topics: Generative AI, Large Language Model, Virtual Assistant

🏢 Organizations: RoboDK


The RoboDK Virtual Assistant is the first step towards a comprehensive, generalized assistant for RoboDK. At its core is OpenAI’s GPT-3.5-turbo-0613 model. The model is given additional context about RoboDK in the form of an indexed database containing the RoboDK website, documentation, forum threads, blog posts, and more. The indexing is done with LlamaIndex, a data framework specialized for this purpose. Thanks to this integration, the Virtual Assistant can provide useful technical support for over 75% of user queries on the RoboDK forum, reducing the time users would otherwise spend manually searching the website and documentation. Users can expect an answer to their question in 5 seconds or less.
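The flow described above is a standard retrieval-augmented generation (RAG) pipeline: index the documentation, retrieve the passages most relevant to a query, and hand them to the LLM as context. A minimal self-contained sketch of that flow is below; the real assistant uses LlamaIndex’s vector index and GPT-3.5-turbo, whereas here a toy keyword-overlap index and a prompt-assembly step stand in for both, and the document snippets are illustrative, not real RoboDK documentation excerpts.

```python
# Toy sketch of a retrieval-augmented assistant pipeline.
# Stand-ins: keyword-overlap scoring replaces a vector index,
# and prompt assembly replaces the actual LLM call.

def build_index(documents):
    """Index each document by its lowercase word set (stand-in for embeddings)."""
    return [(set(text.lower().split()), text) for text in documents]

def retrieve(index, query, top_k=2):
    """Return the top_k documents sharing the most words with the query."""
    query_words = set(query.lower().split())
    ranked = sorted(index, key=lambda item: len(item[0] & query_words), reverse=True)
    return [text for _, text in ranked[:top_k]]

def make_prompt(index, query):
    """Assemble the prompt an LLM would receive: retrieved context plus the question."""
    context = "\n".join(retrieve(index, query))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Illustrative documentation snippets (placeholders, not quoted from RoboDK).
docs = [
    "The RoboDK API can be used from Python and other languages.",
    "Post processors convert a simulation into a robot-specific program.",
    "Simulation projects can be saved and shared as station files.",
]
index = build_index(docs)
prompt = make_prompt(index, "Which languages can the RoboDK API be used from?")
```

In the production system, the keyword index would be replaced by LlamaIndex’s embedding-based retrieval over the crawled website, documentation, and forum content, and `prompt` would be sent to the chat model rather than returned.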

Read more at RoboDK Blog

Silicon Volley: Designers Tap Generative AI for a Chip Assist

📅 Date:

✍️ Author: Rick Merritt

🔖 Topics: Generative AI, Large Language Model, Computer-aided Design, Chip Design, Virtual Assistant

🏭 Vertical: Semiconductor

🏢 Organizations: NVIDIA


The work demonstrates how companies in highly specialized fields can train large language models (LLMs) on their internal data to build assistants that increase productivity.

The paper details how NVIDIA engineers created a custom LLM for internal use, called ChipNeMo, trained on the company’s internal data to generate and optimize software and assist human designers. Long term, engineers hope to apply generative AI to each stage of chip design, potentially reaping significant gains in overall productivity, said Ren, whose career spans more than 20 years in EDA. After surveying NVIDIA engineers for possible use cases, the research team chose three to start: a chatbot, a code generator, and an analysis tool.

On chip-design tasks, custom ChipNeMo models with as few as 13 billion parameters match or exceed the performance of much larger general-purpose LLMs such as the 70-billion-parameter LLaMA2. In some use cases, ChipNeMo models were dramatically better.

Read more at NVIDIA Blog