Why am I getting an unusual reply from the Brainstorm feature?

Modified on Tue, 9 Apr at 6:42 PM

Our LLM derives some of its general intelligence from several open-source language models and conversational datasets. Its architecture is loosely based on a combination of deep learning transformer models such as GPT, GPT-2, and RoBERTa, and it is hosted entirely on our own servers. Because the model is also trained on large collections of open-source conversation pairs available in the public domain, it can occasionally reproduce fragments of a conversation someone had with ChatGPT that appears in that data, producing a "hallucination". If you come across questionable results from Brainstorm, please report them to us at [email protected].
