Welcome to ReportDesk GPT™

Your trusted partner in healthcare data analysis and decision support. Our platform, powered by a revolutionary artificial intelligence model, combines the most advanced natural language processing capabilities with a knowledge base tailored specifically to the healthcare industry.

Our mission

Our mission is to transform how healthcare leaders and professionals gain insights, make decisions, and communicate. We empower you with a tool that provides data analysis, regulatory compliance guidance, and real-time market insights, all in one place.

Our Services

Whether you're a healthcare executive looking to streamline operations, a medical researcher in need of literature and data analysis, or a physician seeking the latest information in your field, ReportDesk GPT™ is here to assist.

Featured

Harness the power of AI in your healthcare journey. Explore our website to learn more about the possibilities that ReportDesk GPT™ can unlock for you.

About ReportDesk GPT™

ReportDesk GPT™ is a sibling model to ReportDesk, trained to follow an instruction in a prompt and provide a detailed response on healthcare-related topics. We are excited to introduce ReportDesk GPT™ to gather users’ feedback and learn about its strengths and weaknesses. During the beta period, usage of ReportDesk GPT™ is free. Try it now at chat.ReportDesk.com.



Sample

In the following sample, ReportDesk GPT™ asks clarifying questions to debug code.

Methods

We trained this model using Reinforcement Learning from Human Feedback (RLHF), using the same methods as InstructGPT, but with slight differences in the data collection setup. We trained an initial model using supervised fine-tuning: human AI trainers provided conversations in which they played both sides—the user and an AI assistant. We gave the trainers access to model-written suggestions to help them compose their responses. We mixed this new dialogue dataset with the InstructGPT dataset, which we transformed into a dialogue format.
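As a rough illustration of that supervised stage (a minimal sketch, not our production pipeline), fine-tuning reduces to minimizing cross-entropy on the assistant's tokens in each conversation. The batch layout and the Hugging Face-style `.logits` output are assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def sft_step(model, optimizer, batch):
    """One supervised fine-tuning step on a batch of dialogues.

    Illustrative assumptions:
      batch["input_ids"] -- tokenized conversation (user and assistant turns)
      batch["labels"]    -- the same ids, with every non-assistant token set
                            to -100 so the loss covers only assistant replies
      model(...)         -- a Hugging Face-style causal LM returning .logits
    """
    logits = model(batch["input_ids"]).logits  # shape (B, T, vocab)
    # Shift by one position: the token at step t predicts the token at t+1.
    loss = F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        batch["labels"][:, 1:].reshape(-1),
        ignore_index=-100,  # skip masked (non-assistant) positions
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```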

To create a reward model for reinforcement learning, we needed to collect comparison data, which consisted of two or more model responses ranked by quality. To collect this data, we took conversations that AI trainers had with the chatbot. We randomly selected a model-written message, sampled several alternative completions, and had AI trainers rank them. Using these reward models, we can fine-tune the model using Proximal Policy Optimization. We performed several iterations of this process.
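To make the comparison step concrete, here is a minimal sketch of an InstructGPT-style pairwise ranking loss for the reward model; the best-first ordering of completions and the tensor shapes are assumptions for illustration, not a description of our exact implementation.

```python
import torch
import torch.nn.functional as F

def reward_ranking_loss(scores: torch.Tensor) -> torch.Tensor:
    """Pairwise ranking loss for a reward model.

    scores: (B, K) reward-model scores for K completions of the same
            prompt, ordered best-first according to the trainer's ranking.
    For every pair (i, j) with i ranked above j, the loss
    -log sigmoid(score_i - score_j) pushes score_i above score_j.
    """
    B, K = scores.shape
    losses = []
    for i in range(K):
        for j in range(i + 1, K):
            losses.append(-F.logsigmoid(scores[:, i] - scores[:, j]))
    return torch.stack(losses).mean()
```

The resulting scalar score is what serves as the reward signal during the Proximal Policy Optimization stage, where it is typically combined with a penalty for drifting too far from the supervised model.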



Limitations

  • ReportDesk GPT™ sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.
  • ReportDesk GPT™ is a beta product, and its answers are limited by its training data.
  • ReportDesk GPT™ is sensitive to tweaks in input phrasing and to repeated attempts at the same prompt. For example, given one phrasing of a question, the model can claim not to know the answer, yet answer correctly after a slight rephrase.

Get in touch

We'd love to hear from you.
