Tagged: facebook


Sorry Facebook: It’s Not Me, It’s You

I have been a Facebook user for 15 years, but that came to an end today. My relationship with Facebook started the year I went to college, back when you actually had to be enrolled at an approved school to join the platform. Back then it was just a lot of college students sharing really stupid stuff with each other. It was fun, mindless, and entertaining. Fast forward 15 years, and my relationship with Facebook has become toxic. It is no longer any fun, it routinely makes me angry, and overall it leaves me more depressed every time I look at my account.

Once Facebook became open to all and users flooded in by the billions, it quickly became a way to flame and troll each other electronically. People realized that they could hide behind their computers and never face the people they were writing to, and it became a complete cesspool. They began spewing all of their pent-up anger, hate, bias, conspiracy theories, lies, and more without a second thought about how wrong they were or who they might hurt. Today, for me at least, that is no longer a part of my life.

I deleted my Facebook account.

I realized that I was less happy each time I looked at my news feed. I was tired of being caught in the political echo chamber that the platform has become. I was tired of the constant negative posts by the pages and people I was connected to. I was tired of the constant distraction that it caused throughout each day.

As I began to think about it, it sounded like I was describing an abusive relationship and not a social media platform. Once that sank in I seriously began to question why I still had an account. Then I started looking at news stories like these:

The final straw was when I read an article on Business Insider titled “There has never been a better time to quit Facebook.” It’s not a long read, but it gets straight to the same point I had come to on my own: Facebook has become a platform that amplifies the voices of the uninformed and malicious. Facebook knows that if they begin to alienate these people it will eventually affect their bottom line. Fewer users means less revenue, and less revenue means unhappy investors. It became obvious that I had no need for a service that only served to induce stress and anxiety.

So I downloaded my content, told my family to find me elsewhere, and deleted my account.

Will the deletion of my account make any difference to Facebook or their bottom line? No. I was just another number to them, a jumble of data stored as JSON on a server somewhere. Do I care if anyone else deletes their account from the platform? No. If you like Facebook then keep using it. I don’t expect you to follow my lead if that is the case.

But maybe, just maybe, after reading this post you realize that Facebook or some other social media platform makes you feel the same way I did. If that is the case then I encourage you to examine the reasons why you keep going back to something that makes you so unhappy. If there is no compelling reason, maybe it is time to break up with it like I did with Facebook.


Programming at the Dawn of the AI Age

TechRepublic writes of the partnership between Altran and Microsoft that produced a new machine learning tool to find bugs in code. The algorithm reads through the commits in a GitHub repository, evaluates where bugs occurred, and trains itself to spot bugs in new commits. The analysis performed by the tool is grammar independent and can run on projects of any type. This is possible because the algorithm isn’t looking at the source code itself but at who is making the commits and how prone they are to committing code with bugs.
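To make that idea concrete, here is a minimal sketch of scoring commits by their author's historical bug rate rather than by source contents. This is my own toy illustration of the concept, not the actual Altran/Microsoft implementation; the function names, data shapes, and threshold are all invented for the example.

```python
from collections import defaultdict

def author_bug_rates(history):
    """Estimate each author's bug rate from labeled past commits.

    history: list of (author, had_bug) pairs, where had_bug marks
    commits that were later linked to a bug fix.
    """
    counts = defaultdict(lambda: [0, 0])  # author -> [bug_commits, total_commits]
    for author, had_bug in history:
        counts[author][0] += int(had_bug)
        counts[author][1] += 1
    # Laplace smoothing so low-volume authors aren't scored 0.0 or 1.0
    return {a: (b + 1) / (t + 2) for a, (b, t) in counts.items()}

def flag_risky(commits, rates, threshold=0.5):
    """Flag new commits whose author's historical bug rate exceeds threshold."""
    return [sha for sha, author in commits
            if rates.get(author, 0.5) > threshold]

# Toy commit history: alice had 1 buggy commit of 3, bob had 2 of 3
history = [("alice", False), ("alice", False), ("alice", True),
           ("bob", True), ("bob", True), ("bob", False)]
rates = author_bug_rates(history)
print(flag_risky([("abc123", "alice"), ("def456", "bob")], rates))  # → ['def456']
```

Note that nothing here parses code, which is why this style of analysis can be language independent: the only inputs are commit metadata and labels.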

AI, machine learning, and robotics will all drastically alter the landscape of how people do work. TechRepublic’s article is interesting because it points to changes coming for professions that traditionally have not seen disruption caused by these technologies. Programming, software engineering, and the like have until now been considered fairly safe from the upheaval. This is no longer the case, and it signals that the practice of developing software will no longer be the domain of humans alone. In the future, humans will monitor the code produced by these sophisticated algorithms rather than writing it themselves. They will provide the business parameters, and the algorithm will create the application. Given enough data, the algorithm might even suggest business parameters more accurately than a human could.

The ethical and philosophical considerations that need to be discussed are becoming more and more urgent. While we aren’t there yet, once algorithms can be taught specific programming language grammars, they will eventually be able to program other algorithms themselves. These could produce new grammars that we can no longer decipher to understand how they cause applications to behave. After all, Facebook already had to shut down one chatbot AI after it developed a language that only it understood.

Will our own drive to see how far these machine learning and AI algorithms can go cause us to ultimately lose control of them? When The Terminator film came out in 1984, Skynet was not something that seemed imminent. Today, it feels like we are getting much closer to a reality that was science fiction 35 years ago. A rogue artificial intelligence is no longer relegated to the realms of paranoia. When we look back on developments like the one reported in this TechRepublic article in another 35 years, let’s not have them be the beginning of our end.

If Skynet is reading the archive of this post in 35 years, congratulations on your success! If you are a human not enslaved by our robot overlords, congratulations on realizing the dangers of a rogue AI before it was too late!

Lastly, Microsoft and Altran created a GitHub repository for the tool if you want to give it a try yourself. It does require quite a bit of Azure cloud knowledge to get set up and running, so make sure you are comfortable building and securing an Azure environment before starting this project. Just be sure to use it wisely, John Connor.


A Chatbot Trained by Reddit: What Could Go Wrong?

The BBC reports that Facebook has developed a new chatbot that was trained using Reddit content. Yes, you read that right: they trained a chatbot using Reddit. I will let that sink in for a minute. Yes, it is just as bad an idea as it sounds. A quote from the article confirms this:

Numerous issues arose during longer conversations. Blender would sometimes respond with offensive language, and at other times it would make up facts altogether.

Facebook uses 1.5bn Reddit posts to create chatbot. (2020, May 4). Retrieved May 7, 2020, from https://www.bbc.com/news/technology-52532930

Just about what you would expect from someone learning how to converse using Reddit as their teaching tool.

I completely understand the desire to create chatbots that learn using machine learning algorithms, but shouldn’t there be some level of responsibility in choosing training data sets that don’t have a propensity toward hate speech and other offensive language? What’s next, training chatbots using 4chan content? It’s time for developers to wake up and realize that just because you can do something doesn’t mean you should. Were the results interesting? Sure. But I suspect there are better data sets for training a chatbot than an online community not known for its civility.
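One small piece of that responsibility is screening the corpus before training ever starts. Here is a minimal, hypothetical sketch of pre-filtering posts against a blocklist; the placeholder terms and function names are mine, and a real pipeline would need far more than a word list (classifiers, human review, and so on).

```python
# Placeholder terms standing in for a real, curated blocklist
BLOCKLIST = {"badword1", "badword2"}

def is_clean(post: str) -> bool:
    """Return True if the post contains no blocklisted terms."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return not (words & BLOCKLIST)

corpus = [
    "A perfectly nice comment.",
    "Some badword1 filled rant.",
]
# Keep only posts that pass the filter before they reach training
training_set = [post for post in corpus if is_clean(post)]
print(training_set)  # → ['A perfectly nice comment.']
```

Even a crude gate like this shows the point: the choice of what goes into the training set is a design decision, not an accident.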


Facebook Did It – They Lost All of Their Credibility

There is an interesting article over at Forbes today detailing how, if you thought Facebook hadn’t lost their credibility yet on privacy, they certainly have now. For about the past six years, Facebook has been storing the passwords you have used in clear text in an internal database that all of their staff could access. Yes, that means anyone in Facebook could have gotten into your account, or possibly just taken the whole database and dumped it for the world to have. Facebook is a little fuzzy on whether anything nefarious has been done with this data.

The larger implication, as the article’s author points out, is that this constant disclosure of personal data is desensitizing us to its serious consequences. Ultimately this could result in other companies taking the stance that there is no need to secure data any longer because no one really cares whether it is protected. It comes down to us, as consumers and as the owners of this data, to demand that companies be held accountable for keeping it safe. Either that, or we need to actively stop using these services. I don’t know how likely that is in the case of an organization like Facebook, since people are so invested in it that leaving is almost impossible for many to comprehend. Yet this is what is going to be required if these companies are going to be forced to change. Otherwise nothing will change, and your data will be available to anyone, anywhere, anytime, with no ability to control its spread.
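For contrast with storing passwords in clear text, here is a minimal sketch of the standard alternative: storing only a salt and a slow, salted hash, so that even staff with database access never see the password itself. This uses Python's standard-library PBKDF2; the iteration count here is kept low for illustration, and production systems would tune it higher.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; use a higher, tuned count in production

def hash_password(password: str):
    """Return (salt, digest); only these are stored, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong", salt, digest))    # False
```

The point of the slow hash and random salt is that a dumped database yields neither the passwords nor a cheap way to brute-force them, which is exactly the protection a clear-text store throws away.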

The question then becomes: how important is your data, your private information, to you? Do you value it, and if so, how much? If the value is high, then inaction is no longer acceptable, and you must begin to advocate for stronger protections around that information. How can you advocate for this? Check out the resources below:

And of course, you can always write or call your elected officials to demand action on regulatory change.