Tagged: facebook

24 May 2020

Programming at the Dawn of the AI Age

TechRepublic writes about a partnership between Altran and Microsoft that produced a new machine learning tool for finding bugs in code. The algorithm reads through the commits in a GitHub repository, evaluates where bugs occurred, and trains itself to spot bugs in new commits. The analysis is grammar-independent and can run on projects of any type, because the algorithm isn't looking at the source code itself but at who is making the commits and how prone they are to committing code with bugs.
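To make the idea concrete, here is a minimal sketch of that committer-based approach: score each author by how often their past commits were later linked to bugs, then flag new commits from historically bug-prone authors. This is only an illustration of the concept, not the actual Altran/Microsoft tool; the commit records, authors, and threshold below are all hypothetical.

```python
# Illustrative only: score committers by their historical bug rate and
# flag new commits from bug-prone authors. The data and threshold are
# made up; a real tool would mine these labels from repository history.
from collections import defaultdict

def author_bug_rates(history):
    """history: list of (author, introduced_bug) pairs from past commits."""
    totals = defaultdict(int)
    buggy = defaultdict(int)
    for author, introduced_bug in history:
        totals[author] += 1
        if introduced_bug:
            buggy[author] += 1
    return {a: buggy[a] / totals[a] for a in totals}

def flag_commit(author, rates, threshold=0.5):
    """Flag a new commit for extra review if its author's bug rate is high."""
    return rates.get(author, 0.0) >= threshold

# Hypothetical labeled history: True means the commit later needed a bug fix.
history = [("alice", False), ("alice", True), ("alice", False),
           ("bob", True), ("bob", True), ("bob", False)]
rates = author_bug_rates(history)
print(flag_commit("bob", rates))    # bob: 2 of 3 commits buggy -> flagged
print(flag_commit("alice", rates))  # alice: 1 of 3 -> below threshold
```

Note that nothing here parses source code, which is why this style of analysis can be language-agnostic, as the article describes.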

AI, machine learning, and robotics will drastically alter how people work. TechRepublic's article is interesting because it points to changes coming for professions that traditionally have not been disrupted by these technologies. Programming and software engineering have, until now, been considered fairly safe from the upheaval. That is no longer the case, and it signals that developing software will no longer be the domain of humans alone. In the future, humans will monitor the code produced by these sophisticated algorithms rather than write it themselves. They will provide the business parameters, and the algorithm will create the application. Given enough data, the algorithm might even suggest which business parameters to use more accurately than a human could.

The ethical and philosophical questions raised here are becoming more and more urgent. We aren't there yet, but once algorithms can be taught specific programming language grammars, they will eventually be able to program other algorithms themselves. Those could produce new grammars that we can no longer decipher, leaving us unable to understand how they make applications behave. After all, Facebook already had to shut down one chatbot AI after it developed a language that only it understood.

Will our own drive to see how far these machine learning and AI algorithms can go ultimately cause us to lose control of them? When The Terminator came out in 1984, Skynet did not seem imminent. Today, it feels like we are getting much closer to a reality that was science fiction 35 years ago. A rogue artificial intelligence is no longer relegated to the realm of paranoia. When we look back on developments like the one in this TechRepublic article another 35 years from now, let's not have them be the beginning of our end.

If Skynet is reading the archive of this post in 35 years, congratulations on your success! If you are a human not enslaved by our robot overlords, congratulations on realizing the dangers of a rogue AI before it was too late!

Lastly, Microsoft and Altran created a GitHub repository for the tool if you want to give it a try yourself. It does require quite a bit of Azure cloud knowledge to get set up and running, so make sure you are comfortable building and securing an Azure environment before starting this project. Just be sure to use it wisely, John Connor.

07 May 2020

A Chatbot Trained by Reddit: What Could Go Wrong?

The BBC reports that Facebook has developed a new chatbot that was trained using Reddit content. Yes, you read that right, they trained a chatbot using Reddit. I will let that sink in for a minute. Yes, it is just as bad an idea as it sounds. A quote from the article confirms this:

Numerous issues arose during longer conversations. Blender would sometimes respond with offensive language, and at other times it would make up facts altogether.

Facebook uses 1.5bn Reddit posts to create chatbot. (2020, May 4). Retrieved May 7, 2020, from https://www.bbc.com/news/technology-52532930

Just about what you would expect from someone learning how to converse using Reddit as their teaching tool.

I completely understand the desire to create chatbots that learn using machine learning algorithms, but shouldn't there be some level of responsibility in training them on data sets that don't have a propensity for hate speech and other offensive language? What's next, training chatbots on 4chan content? It's time for developers to wake up and realize that just because you can do something doesn't mean you should. Were the results interesting? Sure. But I suspect there are better data sets for training your chatbot than an online community not known for its civility.

25 Mar 2019

Facebook Did It – They Lost All of Their Credibility

There is an interesting article over at Forbes today detailing how, if you thought Facebook hadn't yet lost its credibility on privacy, it certainly has now. For roughly the past six years, Facebook has been storing the passwords you have used in clear text in an internal database that all of its staff could access. Yes, that means anyone at Facebook could have gotten into your account, or possibly taken the whole database and dumped it for the world to have. Facebook is a little fuzzy on whether anything nefarious has actually been done with this data.

The larger implication, as the article's author points out, is that this constant disclosure of personal data is desensitizing us to how serious these breaches truly are. Ultimately, other companies could take the stance that there is no need to secure data any longer because no one really cares whether it is protected. It comes down to us, as consumers and as the owners of this data, to demand that companies be held accountable for keeping it safe. Either that, or we need to actively stop using these services. I don't know how likely that is in the case of an organization like Facebook, since people are so invested in it that leaving is almost impossible for many to comprehend. Yet this is what will be required if these companies are going to be forced to change. Otherwise nothing will change, and your data will be available to anyone, anywhere, anytime, with no ability to control its spread.

The question then becomes: how important are your data and your private information to you? Do you value them, and if so, how much? If the value is high, then inaction is no longer acceptable and you must begin to advocate for stronger protections around that information. How can you advocate for this? Check out the resources below:

And of course, you can always write or call your elected officials to demand action on regulatory change.