RT @noUpside: “I was thinking of how cool it would be to build a Twitter-like service where the only posts are GPT-3 outputs.”
Twitter will be Twitter w/GPT-3 outputs. Generated text will transform commentary — esp manipulative — & force rethinking of online identity.
I've been using GPT-2 for a while and find it pretty interesting and useful. I'm disappointed that the GPT-3 models aren't open and that they're only available via an API, but I think it would be pretty interesting to use.
[Yeah you even nerd-sniped ##taproot-activation with it LOL](http://gnusha.org/taproot-activation/2020-07-14.log). [And faked some IRC logs](https://0bin.net/paste/e0ZE4IrWjxGHkFtv#4LCaBJP93pEUh3va-rGgcjm54THuCct0gwHg/E6/zFR).
> < luke-jr> luke-jr: yes i've been following the progress of this proposal all day
> < luke-jr> so you'll send this message to bitcoin-dev?
> < roconnor> luke-jr: no i'm not. the bitcoin-dev mail system is not in the right shape right now, i'll send this to them later tonight or tomorrow
[And some Roger Ver-level arguments against Taproot](https://0bin.net/paste/MMZZH2HDfPMKVF3P#E5BUUOVAiA1oAEecyrdlnPCVX3o7dzLMUzqAbJPsH91). Maybe Ver is GPT2 at this point?
“I further predict that this will spark a creative gold rush among talented amateurs to train similar models and adapt them to a variety of purposes, including: mock news, “researched journalism”, advertising, politics, and propaganda.”
In terms of innovativeness, it is not even close. GPT-3 seems to be a natural extension of deep learning technology, combining improved natural language processing with the accessible big data available to train its neural networks. Bitcoin and the concept of the blockchain came completely out of nowhere, and it is one of the most ingenious ideas of the last 20-30 years. In terms of impact and utility, GPT-3 might surpass Bitcoin, but cryptocurrency was just a brilliant concept.
But the AI told us so!
The article even says it was a lie, is about the lie, and yet the lie still sank in and took root.
From their FAQ:
> Why did OpenAI choose to release an API instead of open-sourcing the models?
> There are three main reasons we did this. First, commercializing the technology helps us pay for our ongoing AI research, safety, and policy efforts.
> Second, many of the models underlying the API are very large, taking a lot of expertise to develop and deploy and making them very expensive to run. This makes it hard for anyone except larger companies to benefit from the underlying technology. We’re hopeful that the API will make powerful AI systems more accessible to smaller businesses and organizations.
> Third, the API model allows us to more easily respond to misuse of the technology. Since it is hard to predict the downstream use cases of our models, it feels inherently safer to release them via an API and broaden access over time, rather than release an open source model.
> The article even says it was a lie, is about the lie and yet the lie still sunk in and took root.
Thanks for pointing this out. I have a creepy feeling I'm going to be lied to and fooled by AI more and more often as my life goes by...
Don't worry, this is exactly the goal of AI. At the far end it's literally impossible to tell, and it can be argued mathematically to be just as real as our reality (q = p, learned distribution = real distribution).
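The q = p parenthetical can be unpacked: language models are trained to minimize cross-entropy, which equals the true entropy plus the KL divergence D_KL(p‖q), and that divergence is zero exactly when the learned distribution matches the real one. A minimal numeric sketch of that identity (the toy distributions are illustrative, not from this thread):

```python
import math

def kl_divergence(p, q):
    """D_KL(p || q) = sum_i p_i * log(p_i / q_i); zero iff q matches p wherever p_i > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

real = [0.5, 0.3, 0.2]        # "real" token distribution p
perfect = [0.5, 0.3, 0.2]     # a model that has learned p exactly
imperfect = [0.6, 0.2, 0.2]   # a model that is still off

print(kl_divergence(real, perfect))    # 0.0 -- samples statistically indistinguishable from reality
print(kl_divergence(real, imperfect))  # positive -- a detector could, in principle, tell them apart
```

In the limit where the divergence reaches zero, no statistical test can separate model output from real text, which is the mathematical sense of the comment's claim.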
Don't feel too bad, we live in interesting times. Thanks for posting the article it was a very interesting piece.
I feel like those of us born before the rise of AI authors will be fooled more than those born after us. It's going to get tricky for sure, and already is, with social media ripe for fooling.
Interesting experiment, but the generated text is boring. I read a couple of paragraphs and scrolled down to get to the point. I imagine the people who peddle this woo are more likely to get fooled. We know that even educated people can respond positively to complete nonsense; see the Dr. Fox effect.
They're probably trying to force the reader to see as many ads between paragraphs as possible.
With this technology, the human writer won't have to do so much of the work of generating bullshit between relevant points.
Definitely. I have been reviewing their demos in the meantime. Productivity applications like search, natural language bash, code completion, customer service, filling forms and summarising articles are far more interesting. Generating derivative articles or stories is not.
I certainly feel as if GPT-3 can replace 90%+ of generic content, inspirational posts without concrete advice, marketing, group affirming feel-good pieces, etc. However, this makes me think more about the limitations of human thought (or at least our default to laziness) than about how good this really is as AI.
Isn't GPT-3 just a bigger GPT-2, and thus not really a particular step forward? Bitcoin, in comparison, had nothing like it before it came into existence, whereas GPT-3 hasn't made quite so big a leap forward.
>GPT-3 just a bigger GPT-2, and thus not really a particular step forward?
Nobody said it was a technological advance? The point is what GPT-3 can *do*. The stuff it can do compared to GPT-2 is mind-boggling. Saying it's "just bigger" is an over-simplification; we're talking about AI models, and sometimes size is the most important part.
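"Just bigger" is itself quantifiable. Using the published parameter counts from the GPT-2 and GPT-3 papers (these figures are not stated anywhere in the thread), the jump is roughly two orders of magnitude:

```python
# Published parameter counts (largest model in each paper; not from this thread).
gpt2_params = 1.5e9    # GPT-2, 2019
gpt3_params = 175e9    # GPT-3, 2020

scale_factor = gpt3_params / gpt2_params
print(f"GPT-3 is ~{scale_factor:.0f}x larger than GPT-2")  # ~117x
```

The architecture is essentially the same Transformer decoder, so the debate above is really about whether a ~117x scale-up that unlocks qualitatively new behavior counts as an "advance".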