OpenAI promises up to $20,000 if users find ChatGPT issues

Topline

OpenAI is launching a bug bounty program for users who find bugs and security issues in its artificial intelligence products, including the wildly popular but controversial chatbot ChatGPT, as AI faces heightened scrutiny from government officials and the technology industry.

Key Facts

San Francisco-based OpenAI announced in a blog post Tuesday that it will pay up to $6,500 per bug found through its bug bounty program, which it is launching with Bugcrowd Inc.

Under the program, people will be rewarded for finding and reporting unique flaws in OpenAI's systems that lead the company to change its code, with rewards determined by the "likelihood or impact" of the flaw at OpenAI's sole discretion.

The company will pay $200 for "low-severity findings," with a maximum possible payout of $20,000 for exceptional discoveries.

Issues eligible for cash rewards include flaws in ChatGPT—which is in its research preview phase—as well as problems with logins, plugins, payments, and data exposure (users must keep the vulnerabilities they find confidential until OpenAI authorizes them to disclose).

In the blog post, the company said the program is intended to increase “transparency and collaboration” and acknowledged: “While we work hard to prevent risk, we cannot predict how people will use or abuse our technology in the real world.”

Key Background

OpenAI, which was founded in 2015, released ChatGPT to the public in November, sparking a wave of interest in AI software. Microsoft—already one of OpenAI’s backers—pledged to invest an additional $10 billion in the company earlier this year, and has begun integrating an OpenAI-powered chat service into its Bing search engine. Although ChatGPT has been used to write college-level essays and poems, produce computer code, plan meals, and create budgets—often with human-like fluency—it also gives wrong answers and contradicts itself. Since ChatGPT’s public release, users have attempted to push the product to its limits, including through so-called “jailbreaks”: prompts designed to circumvent the built-in restrictions meant to prevent harmful outputs, such as hate speech or instructions for committing crimes. Alex Albert, a computer science student at the University of Washington who created a website collecting jailbreak prompts, posted a Twitter thread last month running through some of ChatGPT’s vulnerabilities, prompting OpenAI president Greg Brockman to reply that the company was “considering starting a bounty program.”

Chief Critic

Some experts warn that AI products could make seemingly legitimate misinformation more convincing, displace workers, and help students cheat on exams. Tech executives including Twitter CEO Elon Musk and Apple co-founder Steve Wozniak have criticized the rapid rise of AI, urging developers to pause work on their most advanced systems so the risks can be thoroughly assessed. In an open letter with more than 1,000 signatures, technology leaders argued that AI developers are locked in an “out-of-control race” to create ever more powerful systems.

Tangent

OpenAI’s bug bounty program isn’t the first of its kind: other companies have paid people to discover flaws in their systems, including Amazon, AT&T, Bumble, BuzzFeed, Chime, Coinbase, and Google (for its Chrome browser).

News Peg

Biden administration officials are also weighing potential regulation of AI systems, including ChatGPT and Google’s Bard, but have not yet proposed any specific rules. On Tuesday, the Commerce Department issued a public request for comment on how policymakers could introduce AI accountability measures.

Further Reading

Here’s what to know about OpenAI’s ChatGPT—what it’s disrupting and how to use it (Forbes)

The US government is seeking public input on how to regulate artificial intelligence (Forbes)

Here’s how to use AI—like ChatGPT and Bard—for everyday tasks like budgeting, finding airfare, or planning meals (Forbes)
