OpenAI Shuts Down Chatbot Project To Prevent ‘Possible Misuse’

OpenAI told the developer he was no longer allowed to use its tech after he refused to insert a monitoring tool.

Jason Rohrer, an artificial intelligence (AI) researcher and game designer, created a chatbot for fun during the pandemic last year using OpenAI’s text-generating language model GPT-3. Rohrer named the chatbot “Samantha” and programmed her to be friendly, warm, and immensely curious. He allowed others to customise his creation, which he named Project December, to build their own chatbots as they desired. One man used it to build a close proxy of his dead fiancée. When OpenAI learned about the project, it gave Rohrer the choice of either diluting the project to prevent possible misuse or shutting it down. Rohrer was also asked to insert an automated monitoring tool, which he refused.

Only ‘natural persons’ can be recognized as patent inventors, not AI systems, US judge rules

This isn’t over, says man pushing for neural networks’ rights

AI systems cannot be granted patents and will not be recognised as inventors in the eyes of US law, a federal judge ruled this week, upholding an earlier decision by the US Patent and Trademark Office.

Stephen Thaler, founder of Imagination Engines, a company in Missouri, applied in 2019 for two US patents describing a food container based on fractal geometry and an emergency light beacon. Instead of putting his own name on the applications, however, Thaler gave all the credit to DABUS, a neural network he built and claimed came up with both creations.

The US Patent and Trademark Office, however, rejected both applications and said only “natural persons” are allowed to be named as an inventor on the patent paperwork. Thaler in response sued Andrei Iancu, who was the director of the patent office at the time, in federal court in eastern Virginia to challenge that decision.

University of Tokyo Matsuo Lab spin-off releases demo site for an “AI that summarizes any text into three lines”; accuracy said to “rival humans”

ELYZA (Bunkyo, Tokyo), an AI startup spun out of Professor Yutaka Matsuo’s laboratory at the University of Tokyo, released a demo site on August 26 where users can try “ELYZA DIGEST,” an AI that generates summaries of text. The company says it can summarize faster than a human, with accuracy that “rivals humans.” ELYZA plans to keep improving accuracy and aims to apply the technology to tasks such as drafting meeting minutes and writing up call-center conversation notes.

The company conducts research in natural language processing (NLP) and has developed “ELYZA Brain,” an AI engine it describes as among the largest in Japan in both the volume of Japanese text data used for training and the size of the model.

ELYZA DIGEST was built on top of a large language model as an AI specialized for the summarization task. It is a “generative” (abstractive) model, meaning the AI writes the summary from scratch based on the input text. Unlike “extractive” models, which pull out portions of the existing sentences, it can reportedly produce accurate summaries even when the sentence structure is broken or the input is a conversation among many speakers.
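To make the distinction concrete, the sketch below runs an abstractive (generative) summarizer, which writes new sentences rather than extracting existing ones. It uses the Hugging Face transformers summarization pipeline with a generic English checkpoint purely as a stand-in; ELYZA DIGEST itself is only available through its demo site, and the model name here is an assumption chosen for illustration, not ELYZA’s model.

```python
# Minimal sketch of abstractive ("generative") summarization, as a stand-in
# for the approach described above. The checkpoint is a commonly used
# distilled BART model, NOT ELYZA DIGEST.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

# Fragmented, multi-speaker input of the kind extractive models struggle with.
text = (
    "Speaker A: We agreed to move the launch to October. "
    "Speaker B: Right, and marketing will prepare new materials by mid-September. "
    "Speaker A: Engineering still needs two more weeks for testing."
)

# The model generates a new summary conditioned on the whole input,
# rather than copying spans out of it.
result = summarizer(text, max_length=60, min_length=15, do_sample=False)
print(result[0]["summary_text"])
```

Because the summary is generated token by token while conditioning on the entire input, the output can remain coherent even when the source is fragmented dialogue, which is the advantage the article attributes to the generative approach over extractive span selection.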

Tesla Bot Takes Tech Demos to Their Logical Conclusion

Elon Musk’s bizarre demo exposed the truth of many tech reveals: They are a storyboarded vision of the future held together by digital duct tape.

THE ROBOT WAS not at all real. Or it was very real, depending on whether you believe realness is closely related to physiology or whether you think this whole reality is a simulation. Which is to say, the robot was actually a human cosplaying as a humanoid robot.

The robot shuffled on stage during Tesla’s AI Day yesterday afternoon, a three-hour demo of autonomous car features and slides titled “Multi-Scale Feature Pyramid Fusion.” The big news out of the event was a new custom AI chip for data centers, and a supercomputing system called Dojo. Later in the livestream, Tesla founder and chief executive officer Elon Musk revealed that Tesla was working on this robot. People tuned in, because Musk. Then they laughed, because of the robot. But the joke was on them.

Hands-on impressions of “OpenAI Codex,” which automatically writes programs from human language

“OpenAI Codex,” the new technology OpenAI announced the other day, is, in a word, “an AI that writes programs automatically.”

That said, “writes automatically” does not mean it writes from nothing: when a human gives it an instruction such as “put a cat on the screen,” the AI writes code that does exactly that.

Watching OpenAI Codex in action is quite fascinating.

Fire off instructions in quick succession, such as “put a spaceship on the screen,” “crop it into a circle,” or “make it smaller,” and the program is generated accordingly. It is genuinely impressive, almost like magic.
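For a sense of what driving Codex programmatically looked like, here is a minimal sketch assuming the 2021-era openai Python client and the “davinci-codex” engine from the private beta; an API key is required, the engine name and client interface have since changed, and the prompt content is purely illustrative.

```python
# Minimal sketch: sending natural-language instructions to Codex via the
# 2021-era OpenAI Python client. Assumes private-beta access to the
# "davinci-codex" engine; prompt content is illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

# Instructions are phrased as comments so Codex continues them with code.
prompt = (
    "# Python 3, using pygame\n"
    "# 1. Put a spaceship image on the screen\n"
    "# 2. Crop it into a circle\n"
    "# 3. Make it smaller\n"
)

response = openai.Completion.create(
    engine="davinci-codex",  # Codex engine name during the 2021 beta
    prompt=prompt,
    max_tokens=256,
    temperature=0,  # deterministic output
)

# The generated program comes back as plain text.
print(response["choices"][0]["text"])
```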

But we must always view such things with a healthy dose of skepticism. How much of this is actually true? How far is it an exaggerated show, and where does the real capability begin? …