Today, all-in-one observability provider New Relic announced the launch of Grok – a generative AI assistant to help engineering teams monitor, debug, secure and improve their software stacks using natural language prompts.
Grok comes embedded across the entire New Relic platform, which includes more than 30 correlated monitoring services. Through a simple chat interface, engineers can ask it to monitor and fix software issues, among other tasks.
For example, it can save engineers from the tedious task of manually sifting through telemetry data, the company noted.
How exactly does the Grok generative AI assistant help?
Observability is critical to running digital businesses and making sure software applications continue to deliver expected performance and results. But the current approach to observability largely relies on engineering teams sifting through mountains of siloed telemetry data, which takes time and effort. Moreover, many engineers are unfamiliar with the complex systems and hard-to-use troubleshooting interfaces that many observability platforms provide.
With Grok, which uses OpenAI’s large language models (LLMs), New Relic is looking to address this gap. The new solution, as the company explains, provides enterprises with a simple chat interface. Engineers, regardless of experience, can type in their queries in natural language and get answers to help them isolate and fix the issue at hand.
“Users can simply ask for root causes. Any question on their mind is fair game, however complex, such as ‘Why is my shopping cart service slow?’ or ‘How did the latest server update impact my app?’ New Relic Grok can analyze your telemetry and context (including recent changes introduced) across your entire software stack to suggest underlying causes and resolution steps,” Manav Khurana, chief product officer at New Relic, told VentureBeat.
“Before New Relic Grok, a user would have had to either know exactly where to find some insight or data in New Relic’s platform, or think of how to phrase their question as a custom query, and then iterate to resolve errors in the query and make it return what they need. With New Relic Grok, they can just ask the question and see an answer,” Khurana explained.
Along with isolating and fixing issues, the generative AI assistant can help with other aspects of using New Relic’s observability platform — setting up instrumentation and monitoring, building reports and dashboards, debugging code-level errors and managing accounts. Overall, it lowers the barrier for new teams to adopt observability in their workflows.
“The combination of New Relic’s unified database and OpenAI’s large language models (LLMs) allows users to extend this kind of plain-language questioning into new territory,” Khurana said. “New Relic Grok can actually take questions, translate them into queries, run those queries and then return results as charts, tables, forms, reports and more.”
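The question-to-chart loop Khurana describes can be sketched roughly as follows. This is a hypothetical illustration, not New Relic's actual implementation: the translation step, which would normally be an LLM call, is stubbed with a lookup table so the flow runs offline, and all function names are invented for this example. NRQL is New Relic's real query language, but the query shown is illustrative.

```python
# Hypothetical sketch of the question -> query -> result loop.
# translate_to_nrql() stands in for an LLM call; run_query() stands in
# for executing against the telemetry store. Names are illustrative.

def translate_to_nrql(question: str) -> str:
    """Stand-in for an LLM call that turns plain language into NRQL."""
    canned = {
        "why is my shopping cart service slow?":
            "SELECT average(duration) FROM Transaction "
            "WHERE appName = 'shopping-cart' TIMESERIES",
    }
    return canned[question.strip().lower()]

def run_query(nrql: str) -> list[dict]:
    """Stand-in for running the query; returns fake per-minute latency rows."""
    return [{"minute": 0, "avg_duration_ms": 180},
            {"minute": 1, "avg_duration_ms": 2300}]

def answer(question: str) -> dict:
    """Translate, run, and summarize -- the loop the quote describes."""
    nrql = translate_to_nrql(question)
    rows = run_query(nrql)
    slowest = max(rows, key=lambda r: r["avg_duration_ms"])
    return {"nrql": nrql, "rows": rows,
            "summary": f"Latency peaked at {slowest['avg_duration_ms']} ms"}
```

In a real system the returned rows would be rendered as the charts, tables and reports the quote mentions; here the point is simply the translate-run-summarize shape of the loop.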
Notably, a user who receives an answer that’s not as detailed as expected can iterate on the initial question by simply asking follow-up questions in plain language.
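For follow-up questions to work, the assistant presumably carries the conversation history into each new translation, so a vague follow-up like "break that down by host" can be resolved against the earlier question. The sketch below is an assumption about how such iteration could work, with the LLM call stubbed out; none of these names are New Relic's.

```python
# Hypothetical sketch of follow-up questioning: the chat history is passed
# along with each new question so the (stubbed) translator can refine the
# previous query instead of starting over.

def translate(question: str, history: list[str]) -> str:
    """Stand-in for an LLM call that sees the whole conversation."""
    base = "SELECT average(duration) FROM Transaction"
    if "by host" in question.lower() and history:
        return base + " FACET host"  # refine the earlier query
    return base

class Conversation:
    def __init__(self) -> None:
        self.history: list[str] = []

    def ask(self, question: str) -> str:
        query = translate(question, self.history)
        self.history.append(question)
        return query
```

A first question yields the base query; the follow-up, resolved against the history, yields the same query faceted by host.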
Wherever I go, I see LLMs
This move from New Relic marks the latest effort from an enterprise technology vendor to integrate large language models into its product. In recent weeks, we have seen Salesforce integrating Einstein GPT with its Flow automation suite, Microsoft’s Copilot proliferation, conversational querying from Kinetica, and a generative AI-powered manager from Pathlight for giving teams feedback on their performance. Many business intelligence solutions, including Domo, ThoughtSpot and SiSense, have also started offering generative AI capabilities.
For New Relic, Khurana said, the focus is on creating value by pairing the latest multimodal LLMs with the company’s own APIs and product capabilities.
“That means leveraging OpenAI’s GPT-4 for its NLP prowess, establishing a feedback loop with our own APIs, such as NerdGraph, and enriching the experience with additional context from vector DBs,” he said. “We’re also experimenting with training our own models, which will be especially effective at addressing our customers’ most pressing problems in the observability space.”
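The vector-DB enrichment Khurana mentions typically means embedding reference documents, retrieving the ones nearest to the user's question, and prepending them to the LLM prompt. The sketch below illustrates that retrieval-augmentation pattern in general terms — it is not New Relic's implementation, and the toy bag-of-words embedding stands in for a real embedding model and vector database so the example runs offline.

```python
# Hypothetical sketch of vector-DB context enrichment: embed documents,
# retrieve the nearest to the question, prepend them to the prompt.
# Bag-of-words vectors and cosine similarity are toy stand-ins here.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words term-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * \
           sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

DOCS = [  # illustrative context documents
    "shopping cart service depends on the payments database",
    "server update 2.3 changed the connection pool size",
    "dashboard widgets are configured with saved queries",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the question."""
    qv = embed(question)
    ranked = sorted(DOCS, key=lambda d: cosine(qv, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Prepend retrieved context to the question before the LLM call."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"
```

The design point is that the LLM never has to know the customer's environment in advance: relevant context is fetched per question and injected into the prompt.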