TechRepublic writes of the partnership between Altran and Microsoft that produced a new machine learning tool to find bugs in code. The algorithm reads through the commits in a GitHub repository, evaluates where bugs occurred, and trains itself to spot bugs in new commits. The analysis performed by the tool is grammar independent and can run on projects of any type. This is possible because the algorithm isn't looking at the source code itself but at who is making the commits and how prone they are to committing code with bugs present.
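The grammar-independent idea described above can be sketched as a toy model: score each author's historical bug rate from labeled commits, then use that rate as the risk for a new commit. This is only an illustration of the general approach; the function names and input format here are hypothetical, not the actual tool's API.

```python
from collections import defaultdict

def author_bug_rates(commits):
    """Compute each author's historical bug rate.

    `commits` is a list of (author, introduced_bug) pairs, where
    introduced_bug is True if a later fix traced a defect back to
    that commit. (Hypothetical input format for illustration.)
    """
    totals = defaultdict(int)
    buggy = defaultdict(int)
    for author, introduced_bug in commits:
        totals[author] += 1
        if introduced_bug:
            buggy[author] += 1
    # Fraction of each author's commits that introduced a bug.
    return {a: buggy[a] / totals[a] for a in totals}

def risk_score(author, rates, default=0.5):
    """Estimate defect risk of a new commit from its author's history.

    Unknown authors fall back to a neutral default risk.
    """
    return rates.get(author, default)
```

For example, an author with one buggy commit out of two would score 0.5, while a first-time contributor would get the neutral default. A real model would of course combine many more features than author identity alone.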
AI, machine learning, and robotics will all drastically alter the landscape of how people do work. TechRepublic's article is interesting because it points to changes coming for professions that traditionally have not seen disruption from these technologies. Programming, software engineering, and related fields have until now been considered fairly safe from the upheaval. That is no longer the case, and it signals that developing software will no longer be the domain of humans alone. In the future, humans will be monitoring the code produced by these sophisticated algorithms rather than writing it themselves. They will provide the business parameters, and the algorithm will create the application. Given enough data, the algorithm might even suggest which business parameters to use more accurately than a human could.
The ethical and philosophical considerations that need to be discussed are becoming more and more urgent. While we aren't there yet, once algorithms can be taught specific programming language grammars, they will eventually be able to program other algorithms themselves. These could produce new grammars that we can no longer decipher to understand how they cause applications to behave. After all, Facebook already had to shut down one chatbot AI after it developed a language that only it understood.
Will our own drive to see how far these machine learning and AI algorithms can go cause us to ultimately lose control of them? When The Terminator came out in 1984, Skynet was not something that seemed imminent. Today, it feels like we are getting much closer to a reality that was science fiction 35 years ago. A rogue artificial intelligence is no longer relegated to the realm of paranoia. When we look back on developments like the one reported in this TechRepublic article in another 35 years, let's not have them be the beginning of our end.
If you are Skynet reading the archive of this post in 35 years, congratulations on your success! If you are a human not enslaved by our robot overlords, congratulations on realizing the dangers of a rogue AI before it was too late!
Lastly, Microsoft and Altran created a GitHub repository for the tool if you want to give it a try yourself. It does require quite a bit of Azure cloud knowledge to get set up and running, so make sure you are comfortable building and securing an Azure environment before starting this project. Just be sure to use it wisely, John Connor.