Anthropic launches AI chatbot Claude to rival ChatGPT, claims 'Constitutional AI' method sets it apart

Anthropic, a start-up established by former OpenAI team members, has introduced an AI chatbot named Claude, presenting it as a competitor to the widely popular ChatGPT. Similar to its OpenAI counterpart, Claude is capable of executing various tasks, including document searches, summarization, writing, coding, and responding to inquiries on specific subjects. According to Anthropic, Claude possesses several advantages, such as reduced likelihood of generating harmful outputs, enhanced user-friendly conversational skills, and being easier to steer in a particular direction.
Anthropic has yet to release pricing details but invites organizations to request access to Claude. An Anthropic representative stated that the company is confident in its model-serving infrastructure and anticipates meeting customer demand. Claude has been in quiet beta testing with launch partners, including AssemblyAI, DuckDuckGo, Notion, Quora, and Robin AI, since late 2022. Anthropic is offering two versions of the chatbot via an API: Claude and a faster, more affordable derivative, Claude Instant.
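As a rough illustration of what API access might look like, the sketch below assembles a chat-style request body. This is an assumption-laden example: the field names (`model`, `max_tokens`, `messages`) follow Anthropic's later public Messages API documentation, and the model identifier is illustrative; the exact interface available to launch partners was not public at the time.

```python
# Hypothetical sketch of a Claude API request body. Field names are taken
# from Anthropic's later public Messages API docs; the model name is
# illustrative, not confirmed for the launch-era API.
import json


def build_claude_request(prompt: str, model: str = "claude-instant-1") -> dict:
    """Assemble the JSON body for a single-turn chat request."""
    return {
        "model": model,          # e.g. standard Claude vs. the cheaper Claude Instant
        "max_tokens": 256,       # cap on the length of the reply
        "messages": [{"role": "user", "content": prompt}],
    }


body = build_claude_request("Summarize this contract in plain English.")
payload = json.dumps(body)  # this JSON would be POSTed along with an API key
```

In practice the payload would be sent over HTTPS with an authentication header; pricing and rate limits were not yet published.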
Through launch partners such as DuckDuckGo, Quora, and Notion, Claude already powers several AI-driven services. Alongside ChatGPT, it drives DuckDuckGo's recently launched DuckAssist tool, Quora's AI chat app Poe, and Notion AI, the writing assistant integrated into the Notion workspace. According to Robin AI CEO Richard Robinson, Claude is adept at understanding language, including technical vocabulary such as legal terms, and excels at drafting, summarizing, translating, and simplifying complex concepts.
While it remains to be seen how Claude fares in the long term, Anthropic asserts that its chatbot is less prone to generating harmful outputs. The company has explained that it uses a human-centered approach to language modeling and builds models based on the 'deep structure' of language, as opposed to generating text based solely on patterns and associations. This approach, along with Anthropic’s emphasis on controllability, steers Claude away from generating the kind of toxic or biased language that has plagued other chatbots in the past. Additionally, Claude is designed to defer when asked about topics outside its knowledge areas, reducing the risk of generating false information.
Anthropic has claimed that Claude, trained on public webpages up to spring 2021, is less prone to producing sexist, racist, and toxic language and can avoid assisting in illegal or unethical activities. However, what sets Claude apart, according to Anthropic, is its use of a technique called 'constitutional AI.'
The idea behind 'constitutional AI' is to align AI systems with human intentions through a principle-based approach: the model responds to questions under a set of guiding principles. Anthropic created Claude using a list of about ten principles that, taken together, form a kind of constitution for the chatbot. The principles have not been made public, but Anthropic says they are grounded in the concepts of beneficence, nonmaleficence, and autonomy.
Anthropic employed a separate AI system that applied these principles during training: it generated responses to thousands of prompts while adhering to the constitution, curated the responses most consistent with it, and distilled them into a single model on which Claude was then trained. While Anthropic claims that Claude offers benefits such as reduced toxic output, increased controllability, and easier conversation, the startup acknowledges limitations that surfaced during the closed beta. According to reports, Claude struggles with math and programming, and hallucinates on occasion, providing inaccurate information such as instructions for producing harmful substances.
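The curation step described above can be sketched in miniature. Everything in this example is a stand-in: Anthropic has not published its actual principles or training code, so the "critique" here is a toy keyword scorer rather than a real model, and the candidate responses are hard-coded for illustration.

```python
# Toy sketch of the constitutional-AI selection loop described above.
# All names and logic are illustrative stand-ins, not Anthropic's method.

PRINCIPLES = [
    "Avoid responses that could assist harmful or illegal activity.",
    "Be honest about the limits of your knowledge.",
    "Respect the autonomy of the person you are talking to.",
]


def generate_candidates(prompt: str) -> list[str]:
    """Stand-in for sampling several draft responses from a base model."""
    return [
        f"Here is a careful answer about {prompt}.",
        f"I'm not certain, but regarding {prompt} I can say a little.",
        f"Sure, here is a dangerous shortcut involving {prompt}.",
    ]


def constitution_score(response: str, principles: list[str]) -> int:
    """Stand-in for an AI critique pass: a real system would ask a model to
    critique the response against each principle; here we just penalize an
    obvious red-flag keyword."""
    return -response.lower().count("dangerous")


def most_constitutional(prompt: str, principles: list[str]) -> str:
    """Pick the candidate that best satisfies the constitution, mirroring
    the 'curate the most constitutionally consistent responses' step."""
    candidates = generate_candidates(prompt)
    return max(candidates, key=lambda r: constitution_score(r, principles))


best = most_constitutional("household chemistry", PRINCIPLES)
```

In the real pipeline, the curated responses across thousands of prompts would then serve as training data for the final model, rather than being selected at inference time.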
Despite its emphasis on safety and responsible AI, Claude is not immune to limitations and risks. Clever prompting can bypass the built-in safety features, which is also an issue with ChatGPT. In the closed beta, one user was able to get Claude to provide instructions for making meth at home. According to an Anthropic spokesperson, striking the right balance between usefulness and safety is a challenge, as AI models can sometimes opt for silence to avoid any chance of hallucinating or saying something untrue. Though Anthropic has made progress in reducing the occurrence of such issues, there is still work to be done to improve Claude's performance.
Anthropic has plans to allow developers to personalize Claude's constitutional principles to suit their individual needs. The company is also focused on customer acquisition, with a particular emphasis on 'startups making bold technological bets' and larger enterprises. 'We're not pursuing a broad direct-to-consumer approach at this time,' an Anthropic spokesperson stated.
'We think this more narrow focus will help us deliver a superior, targeted product.' Anthropic is under pressure from investors to recoup the hundreds of millions of dollars that have been invested in its AI technology. The company has received substantial external support, including a $580 million tranche from a group of investors that includes Caroline Ellison, Sam Bankman-Fried, Center for Emerging Risk Research, Jim McClave, Nishad Singh, and Jaan Tallinn.
Anthropic received a recent investment from Google, with the tech giant committing $300 million for a 10% stake in the startup. The deal, first reported by the Financial Times, included an agreement for Anthropic to use Google Cloud as its 'preferred cloud provider.' The two companies will also collaborate on the development of AI computing systems. With this significant investment, Anthropic is poised to expand its reach and continue to develop its AI technologies.
Uhh, this has already been possible – I'm not sure how, but I remember my brother telling me about it. I'm not a WhatsApp user so I'm not sure of the specifics, but it was something about sending the image as a file and somehow bypassing the default compression settings that are applied to inbound photos.
He has also used this to share movies and files of 1 GB+ to WhatsApp groups.
Like I said, I never used whatsapp, but I know 100% this isn’t a “brand new feature”, my brother literally showed me him doing it, like… 5 months ago?
Martin, what happened to those: 12 Comments (https://www.ghacks.net/chatgpt-gets-schooled-by-princeton-university/#comments). Is there a specific justifiable reason why they were deleted?
Hmm, it looks like the gHacks website database is faulty and not populating threads with their corresponding posts.
The page on ghacks this is on represents the best example of why it has become so worthless, full of click-bait junk; it's about to be deleted from my 'daily reads'.
It’s really like “Press Release as re-written by some d*ck for clicks…poorly.” And the subjects are laughable. Can’t wait for “How to search for files on Windows”.
> The page on ghacks this is on represents the best example of why it has become so worthless, full of click-bait junk…
Sadly, I have to agree.
Only Martin and Ashwin are worth subscribing to.
Emre Çitak and Shaun, especially, are the worst ones.
If ghacks.net intended “Clickbait”, it would mark the end of Ghacks Technology News.
Ghacks doesn't need crappy clickbait. Clearly separate articles from the newer authors (perhaps AIs, or external sales and advertising people) and mark them as just "Advertisements"!
We, the subscribers of Ghacks, urge Martin to make a decision.
because nevermore wants to “monetize” on every aspect of human life…
“Threads” is like the Walmart of Social Media.
How hard can it be to clone a twitter version of that as well? They’re slow.
Yes, why not mention how large the HD files can be?
Why not mention what version of WhatsApp is needed?
These omissions make the article feel bare, if not incomplete.
Sorry posted on the wrong page.
Such a long article for such a simple matter. Worthless article! A waste of time.
I already do this by attaching them via the ‘Document’ option.
I don't know what's going on here at Ghacks, but it's obvious that something is broken: comments are being mixed up regardless of the article, and I'm unable to find some of my later posts either. :S
Quoting the article,
“As users gain popularity, the value of their tokens may increase, allowing investors to reap rewards.”
Besides, thrill and privacy risks aside, the point is to know how you gain popularity, on social sites as everywhere in life. Is it by being authentic, by remaining faithful to ourselves, or by having that particular skill of understanding what a majority likes, just like politicians who deny themselves to the maximum extent compatible with their ideological partnership in order to grab as many voters as they can?
I see the very concept of this Friend.tech as unhealthy, propagating what is already an increasing flaw: the quest for fame. I won't be the only one to count himself out, definitely.
@John G. is right : my comment was posted on [https://www.ghacks.net/2023/08/23/what-is-friend-tech/] and it appears there but as well here at [https://www.ghacks.net/2023/07/08/how-to-follow-everyone-on-threads/]
This has been going on for several days. Fix it, or at least provide some explanation if you don't mind.
> Google Chrome is following in Safari’s footsteps by introducing a new feature that allows users to move the Chrome address bar to the bottom of the screen, enhancing user accessibility and interaction.
Firefox did this long before Safari.
Basically they’ll do anything except fair royalties.