Anthropic launches AI chatbot Claude to rival ChatGPT, claims 'Constitutional AI' method sets it apart
Anthropic, a start-up established by former OpenAI team members, has introduced an AI chatbot named Claude, presenting it as a competitor to the widely popular ChatGPT. Similar to its OpenAI counterpart, Claude is capable of executing various tasks, including document searches, summarization, writing, coding, and responding to inquiries on specific subjects. According to Anthropic, Claude possesses several advantages, such as reduced likelihood of generating harmful outputs, enhanced user-friendly conversational skills, and being easier to steer in a particular direction.
Anthropic has yet to release pricing details but is inviting organizations to request access to Claude. An Anthropic representative said the company is confident in its model-serving infrastructure and anticipates meeting customer demand. Claude has been in a quiet closed beta with launch partners, including AssemblyAI, DuckDuckGo, Notion, Quora, and Robin AI, since late 2022. Anthropic is offering two versions of the chatbot via an API: Claude and a faster, more affordable derivative, Claude Instant.
Anthropic’s AI chatbot, Claude, is partnering with major companies such as DuckDuckGo, Quora, and Notion to provide innovative AI-driven services. The new tool can perform various tasks, including searching through documents, summarizing, writing and coding, and answering questions on specific topics. Along with ChatGPT, Claude powers DuckDuckGo's recently launched DuckAssist tool, Quora's AI chat app, Poe, and Notion AI, an AI writing assistant integrated with the Notion workspace. According to Robin AI's CEO Richard Robinson, Claude is adept at understanding language, including technical language like legal terms, and excels in drafting, summarizing, translating, and simplifying complex concepts.
While it remains to be seen how Claude fares in the long term, Anthropic asserts that its chatbot is less prone to generating harmful outputs. The company has explained that it uses a human-centered approach to language modeling and builds models based on the 'deep structure' of language, as opposed to generating text based solely on patterns and associations. This approach, along with Anthropic’s emphasis on controllability, steers Claude away from generating the kind of toxic or biased language that has plagued other chatbots in the past. Additionally, Claude is designed to defer when asked about topics outside its knowledge areas, reducing the risk of generating false information.
Anthropic has claimed that Claude, trained on public webpages up to spring 2021, is less prone to producing sexist, racist, and toxic language and can avoid assisting in illegal or unethical activities. However, what sets Claude apart, according to Anthropic, is its use of a technique called 'constitutional AI.'
'Constitutional AI' aims to align AI systems with human intentions through a principle-based approach: the model responds to questions by consulting a set of guiding principles. Anthropic created Claude using a list of about ten principles that, taken together, form a kind of constitution for the chatbot. Although the principles have not been made public, Anthropic states that they are grounded in the concepts of beneficence, nonmaleficence, and autonomy.
Anthropic employed a separate AI system to apply these principles for self-improvement, generating responses to an array of prompts while adhering to the constitution. After exploring possible responses to thousands of prompts, the AI curated the most constitutionally consistent ones, which Anthropic distilled into a single model that Claude was then trained on. While Anthropic claims that Claude offers benefits such as reduced toxic outputs, increased controllability, and more natural conversation, the startup acknowledges limitations that surfaced during the closed beta. According to reports, Claude struggles with math and programming, and it occasionally hallucinates, providing inaccurate information such as dubious instructions for producing harmful substances.
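The generate-critique-curate loop Anthropic describes can be sketched in broad strokes. Everything below (the principle texts, the model stubs, and the scoring heuristic) is an illustrative assumption, not Anthropic's actual implementation:

```python
# Hypothetical sketch of a constitutional-AI-style curation loop.
# The principles, model stubs, and scoring heuristic are placeholders.

CONSTITUTION = [
    "Choose the response least likely to be harmful.",
    "Choose the response that is most honest and helpful.",
    "Choose the response that respects the user's autonomy.",
]

def generate_candidates(prompt, n=4):
    """Stand-in for sampling n candidate responses from a base model."""
    return [f"response {i} to: {prompt}" for i in range(n)]

def critique_score(response, principle):
    """Stand-in for asking the model how well a response satisfies a
    principle. Here: a toy heuristic that penalizes one banned word."""
    return 0.0 if "harmful" in response else 1.0

def most_constitutional(prompt):
    """Keep the candidate with the highest total score across principles."""
    candidates = generate_candidates(prompt)
    return max(candidates,
               key=lambda r: sum(critique_score(r, p) for p in CONSTITUTION))

# Pairs of (prompt, curated response) collected this way would then form
# the fine-tuning dataset for the final model.
best = most_constitutional("How do I stay safe online?")
```

In the real pipeline the critique and revision steps are themselves performed by a language model, so the constitution steers training data selection rather than being hard-coded as a filter.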
Despite its emphasis on safety and responsible AI, Claude is not immune to limitations and risks. Clever prompting can bypass the built-in safety features, which is also an issue with ChatGPT. In the closed beta, one user was able to get Claude to provide instructions for making meth at home. According to an Anthropic spokesperson, striking the right balance between usefulness and safety is a challenge, as AI models can sometimes opt for silence to avoid any chance of hallucinating or saying something untrue. Though Anthropic has made progress in reducing the occurrence of such issues, there is still work to be done to improve Claude's performance.
Anthropic has plans to allow developers to customize Claude's constitutional principles to suit their individual needs. The company is also focused on customer acquisition, with a particular emphasis on 'startups making bold technological bets' and larger enterprises. 'We're not pursuing a broad direct-to-consumer approach at this time,' an Anthropic spokesperson stated. 'We think this more narrow focus will help us deliver a superior, targeted product.'
Anthropic is under pressure from investors to recoup the hundreds of millions of dollars that have been invested in its AI technology. The company has received substantial external support, including a $580 million tranche from a group of investors that includes Caroline Ellison, Sam Bankman-Fried, the Center for Emerging Risk Research, Jim McClave, Nishad Singh, and Jaan Tallinn.
Anthropic received a recent investment from Google, with the tech giant committing $300 million for a 10% stake in the startup. The deal, first reported by the Financial Times, included an agreement for Anthropic to use Google Cloud as its 'preferred cloud provider.' The two companies will also collaborate on the development of AI computing systems. With this significant investment, Anthropic is poised to expand its reach and continue to develop its AI technologies.