Is ID verification the answer to social media safety?

In less than a generation, social media has grown from a direct electronic information exchange into a virtual gathering place where millions of people worldwide can express their views and opinions. Social media and mobile devices present many advantages for students, and when used correctly they can be a force for good. They create opportunities for collaborative learning and give students an accessible way to obtain resources, materials and course content, as well as supporting easy interaction with mentors and peers. A study conducted by the University of Brussels found that using online social media for collaborative learning contributes significantly to students’ academic performance and satisfaction.

However, online abuse and coordinated manipulation are just some of the threats that come with living in an online world, and social platforms and regulatory authorities have weighed various approaches to combating them. EU regulation also focuses on these issues.

A proposal that has repeatedly been raised is personal identity verification, and the possibility of making it mandatory when creating a social media account. Industry experts have called on platforms to link user accounts to official ID to hold social media users accountable, and the IT industry believes this could be implemented without compromising personal privacy.

The problem with anonymity on the internet

While anonymity can promote free speech and serve as a tool against discrimination, it also brings many safety concerns, such as online bullying, harassment and cybercrime, to name a few.

While ‘hiding behind a screen’ may enable users to speak their minds freely, it also raises the question of online accountability, especially when it comes to threats such as cyberbullying, defamatory content, online grooming, exploitation and blackmail.

This is particularly relevant after the Euro 2020 final, when England players were subjected to a torrent of racial abuse fuelled by people remaining anonymous. In recent years, internet analysts and the public have expressed growing concern that the content, tone and intent of online interactions have deteriorated, and that uncivil and manipulative behaviour on the internet will persist – and may get worse.


With our online societies mirroring the real world, we need to start identifying where to put the right detection and regulation to tackle online harassment, just as we would offline. Companies need to be accountable and willing to place safety at the core of their products, and this is what EU regulations aim to do. We cannot eradicate online harassment completely; we need to acknowledge that it exists and learn to prevent and anticipate it, taking cultural and social backgrounds into account. Accountability requires that those responsible for any misconduct be identified and brought to justice. Discrimination and other hate crimes are punishable by law, but if people remain anonymous they cannot, by definition, be identified, making enforcement impossible.

Social media platforms have a part to play 

Social networks have a responsibility to ensure security processes and standards are in place so that they are safe spaces for everyone. All ages use social media, but none more so than Gen Z – the true digital natives, who are as young as 13. Cyberbullying in particular has been on the rise among young people during lockdown, with an estimated 24% of children experiencing some form of it. When it comes to online abuse, we need more policy alignment on what qualifies as abuse and toxicity – these can be very subjective notions – as well as clearer regulation and enforcement.

There are many sophisticated technological tools available for social media platforms to ensure they provide safe spaces for users and students to interact. Age estimation technology such as Yoti’s can flag and block accounts where there is suspicion or doubt about a user’s age. AI is also a crucial element for effective, proactive monitoring of online behaviour. AI personal moderation features can be used across social networks to let users block words or phrases they find offensive, harmful or triggering, so that they no longer see them in comments or content on the app.
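At its simplest, the personal moderation idea described above amounts to a user-defined blocklist that hides matching comments. The sketch below illustrates the concept only – real platforms use far more sophisticated AI models, and the function names here are invented for illustration:

```python
# Minimal sketch of a personal moderation filter: a user chooses phrases
# they no longer wish to see, and comments containing any of them are hidden.
# Purely illustrative; production systems rely on trained AI classifiers.

def build_filter(blocked_phrases):
    """Return a function that reports whether a comment should be hidden."""
    blocked = [phrase.lower() for phrase in blocked_phrases]

    def is_hidden(comment):
        text = comment.lower()
        # Hide the comment if any blocked phrase appears in it.
        return any(phrase in text for phrase in blocked)

    return is_hidden

# Example: a user blocks two phrases they find upsetting.
hide = build_filter(["spam", "you are stupid"])
print(hide("This looks like SPAM to me"))  # True: matches a blocked phrase
print(hide("Great article, thanks!"))      # False: nothing blocked
```

Case-insensitive substring matching keeps the example short; a real feature would also need to handle misspellings, word boundaries and context, which is where AI models come in.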

As social platforms, we have a duty to invest heavily in educating our users about what constitutes a threat versus appropriate behaviour in online environments, and to give them the tools needed to act accordingly. We can all contribute to a safer online environment by developing social standards around online interactions, leading by example, and standing up to harassment or any other unwanted online behaviour that prevents users from self-fulfilment and empowerment.

The next era of moderation

The UK government is debating a new law that would require people opening new social media accounts to prove their identity by providing a verified form of ID. According to recent research from BCS, more than half of tech experts (56%) believe linking social media accounts to true identities is technically achievable. Making verification checks compulsory could help reduce the creation of fake accounts, which widen the spread of online hate.


Protecting students online matters to parents, teachers and social platforms alike, and providing a verified form of ID to open accounts is part of a larger set of security and safety measures that we all need to consider.

Teaching students about social media safety, privacy, security and digital literacy is a crucial conversation to have, and educating children from a young age sets them up to make safer online choices in the future.

Anonymity breeds antisocial behaviour and platforms should make the digital world a better place by dispelling the darkness it creates. When used thoughtfully, social media can provide powerful opportunities for professional growth, enhanced communication, and conversations that allow learning to continue beyond the classrooms.
