So let's see what GPT-2, a younger sibling of GPT-3, has to say about our future with artificial intelligence. The entire text below was written by GPT-2, without my corrections.


To me the question is, in what way is there a danger in artificial intelligence as a solution? There is a danger in using artificial intelligence as a tool for the development of science fiction. A danger is the temptation to take advantage of a technology, to use it for a purpose beyond what’s good and safe and what’s the right thing to do.
The danger is to use an algorithm to predict the future. An example of the danger is the use of algorithms in law enforcement, that is the use of such algorithms for law enforcement purposes. So, one of the dangers is to take advantage of the technology. The question then is, does it have an interest in that? Is it an interested entity in the technology? Then comes the tricky question, if you start to use it in that way, will that end up in harm? Is the technology involved in doing anything better than what has come before? The question is, will we have our technological competence on that? If not, how will we be able to replace it with new technologies if it starts using that way? The question becomes, will it also use a dangerous technology to solve the issues that are important to us? And in that context of a technology’s usefulness, if it really had an interest, what would it be trying to do? And that is part of the whole problem with all of these systems. There is some science fiction which has shown that a computer program can take care of something you have never thought possible. This computer program knows everything you have ever asked it to say about your background or your family background. And this program can give you some things which no previous person has ever given you about your family history or your past. It can take care of the things you want to know, and it can take care of the things that might seem unimportant. It may even be able to even tell you where you got the most money from. But it still does not know everything you have ever thought about your family and past. And you will not be able to tell it why your father or your mother died, where you live or what you want to do when you grow up. So how can you tell that computer program’s interests are something like that? It can just be that it has an interest in certain topics. Or it could be that it has a particular interest that you think would work for you. 
Or it could be that it is trying to answer your question. It can either tell you how you want to go or get it to tell you how you want to go. Its interests in particular topics will be very similar to your interests. Its interests will be as it knows them to be. You can either tell your children what that interests is, or you can let it know. It will try to make sure you tell it. And it will try to teach you. If it is trying to be a teacher, it will be in a way as a teacher. If it is trying to be an attorney, it will be in a way as an attorney. If it is trying to help you, it will have to come up with ways to help you. If it is trying to help you make the right decisions, it will be in a way as a rationalist. If it is trying to be a philosopher, it will be in a way as a philosopher. You must know what your interests are. Now there are ways of telling a computer what you want it to do. For example you might be asking it what its interests are in particular topics. You will tell it if you want the computer to talk to you, if you want it to respond to your voice, or if you want it to try to do something for you.


Now, we have many questions to ask ourselves (humans). Some of them: what happens if we lose control when one AI algorithm, for example, begins teaching another AI algorithm in a bad way, and everything is stored in the cloud? Where is the plug? An AI could predict the stock market and spread stories over social networks to cause panic, making gains for the corporations that deploy such systems. So we will need AI software to solve AI problems — some kind of AI integrity radar that monitors and looks for "bad AI boys". Huh… it's getting interesting. We are experiencing a technological revolution greater and faster than any other in human history.