Our LLM derives part of its general intelligence from several open-source language models and conversational datasets. Its architecture loosely combines various transformer-based deep learning models, such as GPT, GPT-2, and RoBERTa, and it is hosted entirely on our servers. Because our LLM is also trained on the vast open-source conversation data available in the public domain, it may reproduce a conversation someone had with ChatGPT that appears in the dataset, producing a "hallucination". If you come across questionable results from Brainstorm, please bring them to our notice at hello@paperpal.com.