ChatGPT passes Wharton MBA exam and US Medical exam
In the tech world, ChatGPT and its many applications continue to fascinate. Education is one area where ChatGPT is viewed as a game-changer, and the chatbot appears to perform well in medical education: one study claims that ChatGPT passed the United States Medical Licensing Examination (USMLE), which is ordinarily taken by medical students who want to become licensed physicians.
In fact, as researchers demonstrated in one publication, ChatGPT not only passed the exam, which consists of three steps taken at different stages of medical training, but also provided explanations and insights into how it arrived at its conclusions. The initial study, released in December and accessible on medRxiv, showed that ChatGPT achieved greater than 50% accuracy across all tests. The paper has not yet been peer reviewed, though.
Professor Christian Terwiesch, who authored the research paper "Would Chat GPT3 Get a Wharton MBA?", said:
"ChatGPT has remarkable ability to automate some of the skills of highly compensated knowledge workers in general and specifically the knowledge workers in the jobs held by MBA graduates including analysts, managers, and consultants."
Another intriguing instance involves an MBA exam created by a Wharton professor at the University of Pennsylvania. The GPT-3-powered chatbot passed the MBA course's final exam with a grade between a B- and a B.
What is ChatGPT?
ChatGPT is a large language model created by OpenAI. Trained on a wide variety of internet text, it can produce human-like responses to various prompts, and it can be fine-tuned to carry out particular tasks such as text completion, question answering, and language translation. It employs the transformer architecture, a deep learning model designed to handle sequential input such as language.
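The transformer's core mechanism can be illustrated with a minimal sketch of scaled dot-product attention in plain Python. This is a toy example with hand-picked vectors, not the actual model: each query produces a weighted average over the value vectors, with weights determined by how well the query matches each key.

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each query scores every key,
    and the scores (after softmax) weight an average of the values."""
    d = len(keys[0])  # key dimension, used for scaling
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# With two identical keys, the attention weights are uniform (0.5 each),
# so the output is the plain average of the two value vectors.
print(attention([[1.0, 0.0]],
                [[1.0, 0.0], [1.0, 0.0]],
                [[1.0, 2.0], [3.0, 4.0]]))  # [[2.0, 3.0]]
```

Real transformers apply this in parallel across many heads and layers, but the weighted-average idea is the same.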
Some interesting features of ChatGPT include:
- The first iteration of ChatGPT was trained on a dataset of more than 40GB of text, including books, papers, and websites.
- With 175 billion parameters, ChatGPT is one of the largest language models currently available.
- Fine-tuning ChatGPT normally takes a few hours on a powerful GPU.
- ChatGPT can produce about 40 words per second.
- The model's perplexity, a gauge of how well it models the training data, is in the range of 20, which is regarded as excellent.
- ChatGPT has been used in numerous applications, including chatbots, question-answering systems, and language translation.
- The model can produce text that is difficult to distinguish from human-written material.