Opinion

Blaming Facebook and YouTube for amplifying the Christchurch shooter’s message lets the likes of 8chan off the hook


Facebook and YouTube have been heavily criticised for their response to the Christchurch massacre in New Zealand after a video of the atrocity livestreamed by alleged shooter Brenton Tarrant was uploaded to their platforms hundreds of thousands of times. Despite only being viewed live by some 200 people when Tarrant broadcast his attack on 15 March, footage he filmed while slaughtering 50 Muslims at two mosques in the city was uploaded at least 1.5 million times to Facebook alone in the 24 hours that followed the deadly assault.

Four days after the attack, Facebook said it had identified over 800 different edits of the footage, and that video of the incident was at one point being uploaded to its platform as often as once every second. By the evening of the day of the attack, the number of videos of the incident being uploaded to YouTube had forced the Google-owned platform to turn off a feature that allows users to search for recently uploaded videos, one of the company’s executives told the Washington Post.

Both YouTube and Facebook have been quick to defend themselves against allegations that they were too slow to take down versions of Tarrant’s footage that were uploaded to their platforms in the wake of the atrocity, noting that the online reaction to the first livestreaming of a major terrorist attack had been unprecedented. In spite of this, politicians including New Zealand Prime Minister Jacinda Ardern and UK Home Secretary Sajid Javid admonished the social media firms over how they dealt with footage of the shooting, with both lawmakers arguing that these companies have a responsibility to make sure their platforms are not used to amplify the voices of terrorists such as Tarrant.

While this is certainly true, it is curious that politicians and traditional media outlets spent so much time going after the likes of Facebook and YouTube after the attack, and so little asking questions about the shadowy image boards on which Tarrant and far-right extremists like him are radicalised and share their ideas.

While major social media firms have done a reasonable job over the past few years of eradicating extremist Islamist material from their networks, pushing jihadi radicals on to smaller platforms and encrypted messaging services, they have not been quite as effective when it comes to the far right and white supremacists. Despite a crackdown on conservative voices that many on the right perceived as an attack on freedom of speech, Twitter, Facebook and others have found it more difficult to police far-right communities, not least because such communities often communicate in coded in-jokes and are careful not to break websites’ rules and standards. For the most part, however, any content that is overtly extremist, racist or offensive is typically taken down from large social media platforms fairly quickly these days.

Conversely, posts on the image board on which Tarrant is said to have been radicalised are a cesspit of racism, antisemitism and homophobia of a kind that would not only be removed from Facebook, Twitter or even Gab, but that would in some countries result in their creators being arrested for hate speech, should authorities be able to identify them. Immediately before launching his attack, Tarrant took to the “politics” pages of 8chan to publicise his planned atrocity, posting a link to the Facebook livestream on which he would broadcast the mass shooting moments later, and boasting of how he would target “invaders”.

Within minutes of Tarrant posting his message, fellow 8chan users, who almost without exception use the website anonymously, praised him for launching his attack. In the aftermath of the atrocity, many 8chan users took to the website’s pages to declare Tarrant a hero, while others labelled the shooting a Jewish conspiracy or a false flag intended to further the Islamisation of the west. Users of 8chan and other sites like it, such as 4chan and Voat, also regularly heap praise on Anders Behring Breivik, the Norwegian white supremacist terrorist who killed 77 people in two attacks in Oslo and on the island of Utøya in 2011.

Astonishingly, 8chan and its ilk can be easily accessed via the surface web, without the specialist browsers such as Tor that are needed to reach sites on the dark web. While a number of New Zealand internet service providers did block access to the likes of 4chan, 8chan and LiveLeak this month after they failed to take down footage of the Christchurch attack, one has to ask why these platforms are generally allowed to host such inflammatory content with apparent impunity when so much attention is paid to how more mainstream websites police their networks.

The fact that Tarrant was able to livestream his attack has quite correctly raised questions about the ethics of allowing people to broadcast anything they like on the internet in real time, but strangely, few are asking why racists, antisemites and homophobes are freely allowed to congregate and radicalise one another on sites such as 8chan with little fear of the content they post even being moderated. While big tech firms could certainly do more when it comes to preventing their networks being used to amplify the voices of extremists, in this instance, traditional media and politicians may have made a whipping boy of the likes of Facebook and Twitter when it is in fact 8chan and similar sites that are serving as recruitment grounds for a new breed of far-right terrorist.

Opinion

Why organised criminal gangs are actively grooming teenagers to become the next generation of cyber hackers


More than two years have passed since Europol warned in its 2017 Serious and Organised Crime Threat Assessment that traditional organised crime networks had belatedly gone digital. It noted at the time that these groups were increasingly turning to Crime-as-a-Service (CaaS) offerings, which were being sold on the dark web by people with the requisite technical skills. Fast forward 24 months, and it would appear that gang bosses may be tiring of having to rely on the CaaS business model whenever they need access to individuals with hacking skills. Last week, senior British police officers warned that organised crime gangs are now actively recruiting their own hackers, and are targeting teenage gamers on the autistic spectrum as part of their efforts to do so. Citing research that suggests more than 80% of cyber criminals have a background in gaming, the National Police Chiefs’ Council (NPCC) launched a campaign intended to turn teenagers away from cyber crime and encourage them to use their hacking skills for good. But noble as the initiative appears, it is unlikely to reverse a trend that is making teenage hackers the new elite of the organised criminal underworld.

It is not difficult to see why crime gangs are eager to secure the services of a new generation of young hackers. A slew of recent cases has demonstrated just how much money can be made from their skills, somewhat contradicting a 2017 National Crime Agency (NCA) report that claimed young cyber criminals were more interested in the notoriety their activities garnered than in any financial reward.

Earlier this month, 24-year-old Zain Qaiser was handed a six-year sentence by a British court after being found guilty of using malware to blackmail visitors to pornography websites. Between 2012 and 2014, the former computer science student is thought to have helped an organised criminal gang from Russia make millions of pounds by infecting adverts on legal adult websites with ransomware that demanded payments of up to $1,000 from victims. Prosecutors said Qaiser was personally paid more than £700,000 ($910,370) for his part in the scam, which he is said to have spent on prostitutes, luxury hotels, gambling and a Rolex watch. The NCA, which is often referred to as the UK’s equivalent of the FBI, described it as the most serious case of cyber crime it has investigated to date.

Just days later, an unemployed university drop-out from the city of Liverpool in the UK was sentenced to more than five years behind bars after being convicted of running the Silk Road 2.0 dark web illicit marketplace. Thomas White, 24, had helped run the original Silk Road until it was closed down by FBI investigators in 2013. Just one month after it was taken offline, White launched Silk Road 2.0, which like its predecessor was used by vendors to offer illicit items including drugs, weapons, cyber crime tools and stolen credit card details. While it is unknown how much money White personally made from creating the site, investigators estimated that it was used to sell illegal items worth $96 million, on which the former accounting student would take a commission of up to 5%. White should consider himself lucky he is not in the position of Ross Ulbricht, the creator of the original Silk Road website, who was jailed for life with no chance of parole in 2015.

At the beginning of this year, police in Germany arrested a 19-year-old man in connection with a hacking incident that resulted in the personal details of politicians and celebrities being published on Twitter. In what was described as the largest such leak in the country’s history, documents including letters sent and received by German Chancellor Angela Merkel were dumped online in December of last year. The teenager, identified only as Jan S in line with Germany’s privacy laws, said that while he had been in contact with the hacker who leaked the documents, he played no part in obtaining them. Last August, a 16-year-old boy from Australia who said he dreamed of working for Apple pleaded guilty to hacking into the iPhone maker’s network and downloading 90 gigabytes of internal files. He was spared jail when he was sentenced last September in an Australian children’s court, despite the offences carrying a jail term of up to three years.

Prior to the invention of the internet, those who found themselves operating in the world of serious and organised crime did so largely as a consequence of their environment and the people around them. Now, hackers with the requisite skillset can carry out cyber crime activities involving huge amounts of money from their parents’ basement, without ever having to personally interact with their associates. While British police efforts to dissuade young people vulnerable to being groomed into becoming the next generation of cyber criminals are laudable, it is likely that many will find the money and notoriety on offer to major hackers more attractive than the prospect of working for the other side.

Opinion

Banning begging would help human trafficking victims as well as the genuinely destitute


A considerable number of experts on homelessness and poverty now agree that there are far better ways of helping vulnerable individuals who find themselves on the street than giving them money. Accepting that any cash handed over in such circumstances will in all likelihood be spent on alcohol or drugs, professionals who work with the homeless and with people who beg in city and town centres often advise that donating to charities that support vulnerable individuals is a far more productive way to help. Many people choose to ignore this advice and generously hand over their hard-earned money to beggars with the very best of intentions, in many cases oblivious to the fact that their kindness could very well be doing more harm than good. Aside from supporting substance abuse and alcoholism among the destitute, those who give money directly to beggars could also be contributing to the profits of organised crime networks, and prolonging the suffering of modern slaves who are forced onto the streets to pose as homeless people in order to elicit sympathy from passers-by.

The large sums of money that can be made by beggars in many western nations have led to a rise in the phenomenon of forced begging, which involves organised criminal gangs compelling victims of human trafficking to assume the guise of homeless people and ask members of the public for cash handouts. In many cases, those who find themselves forced to work as bogus beggars are persuaded to leave a life of poverty in their home countries with the promise of well-paid work in wealthier locations. In a tactic used widely by traffickers who exploit people for prostitution and other forms of forced labour, victims then discover they have been lied to when they arrive in the country in which they had been promised work.

They are typically made to live in appalling conditions, are vulnerable to both physical and sexual abuse, and are compelled to hand over to their traffickers all the money they make while begging. In Western Europe, those who end up working as forced beggars are typically drawn from poorer countries in the east of the continent. In the US, those forced into organised begging often have an unstable immigration status, or are American citizens with physical or learning disabilities, according to US anti-slavery charity Polaris.

In October of last year, police in Spain dismantled a trafficking network that shipped disabled Romanians to the city of Santiago de Compostela and forced them to beg and act as human statues in the street. The gang’s victims were convinced to leave their home country on the promise of legitimate catering work, but once they arrived in Spain they were housed in appalling conditions and forced to beg on their knees regardless of the weather. If victims fell ill as a result of the horrifying circumstances in which they found themselves and were unable to work, members of the gang would beat them violently.

The UK has become a major focus for organised begging gangs, partly on account of regular news reports claiming that beggars in major cities such as London can make many hundreds of pounds a day. Last month, a judge in Northern Ireland pledged to come down hard on any organised beggars who appeared before him in court, noting how gangs had been flying cells of bogus beggars into the province every six weeks. Jailing a woman from Bucharest for two months for stealing a bottle of vodka, Judge Barney McElholm made the pledge at Londonderry Magistrates Court, arguing that people such as the defendant were doing a great disservice to those who are genuinely homeless.

Members of a large Romanian organised begging gang were reported to have left Norway in April 2017 after a documentary screened by state broadcaster NRK exposed its members’ activities. Female members of the network were seen to spend their days begging on the streets of the southwestern city of Bergen, before working as prostitutes and stealing credit cards at night. Much of the proceeds of the gang’s illicit activities would then be sent back to Romania, news of which prompted Prime Minister Erna Solberg to urge Norwegians to consider whether it was a good idea to give money to people claiming to be homeless.

Norway has attracted criticism over recent years for daring to consider whether it might be desirable to ban begging, with those opposed to the idea labelling the wealthy country “mean” for even making such a suggestion. But with more people coming round to the idea that handing over money to genuinely homeless people might be counterproductive, and with evidence suggesting that many beggars on the street might not be what they seem, outlawing the practice of asking members of the public for money in the street might be the only way of protecting the vulnerable.

Opinion

Tech giants have lost the chance to self-regulate after repeatedly failing to tackle harmful content


Nobody can say they were not warned. After years of big tech firms being told they must take concrete steps to prevent their platforms being used for the distribution and hosting of harmful content such as child abuse material and extremist propaganda, it appears governments around the globe have finally lost patience with their abject failure to do so. The livestreaming on Facebook of last month’s Christchurch terrorist atrocity in New Zealand seems to have been the straw that finally broke the camel’s back. In the aftermath of the deadly attack, during which gunman Brenton Tarrant used the social network to broadcast real-time footage of himself killing 50 Muslims at two mosques in the city, lawmakers in a number of countries have moved to make good on their threats of regulating online spaces.

For their part, owners of social media companies appear to have recently sensed the writing is on the wall, with a number seemingly accepting that greater regulation of their platforms has now become all but inevitable. In a move that some have framed as being more about deflecting blame for his company’s inability to police harmful content than anything else, Facebook boss Mark Zuckerberg last month used an opinion piece for the Washington Post to tell readers that governments and regulators have a “more active role” to play in holding tech firms to account when it comes to removing potentially harmful material. Echoing Zuckerberg’s thoughts just days later in an interview with Bloomberg, Twitter CEO Jack Dorsey called for improved government oversight of social media networks, telling reporter Jon Erlichman: “It’s the job of regulators to ensure protection of the individual and a level playing field.” Back in January, Salesforce CEO Marc Benioff told CNBC that the threat posed by Facebook, Google and Twitter should be treated as a public health issue, arguing they should be regulated in much the same way as tobacco and sugar.

And so it has come to pass. Less than a month after events in Christchurch, the UK government this week published plans to set up a new online regulator that could have the power to issue substantial fines to social media firms that fail to remove harmful content in a timely manner. The new watchdog may also be able to hold social media executives personally accountable for any such incidents, and would be charged with ensuring that these companies fulfil their duty of care to users. Launching the Online Harms White Paper on Monday, British Home Secretary Sajid Javid said: “[W]e cannot allow the leaders of some of the tech companies to simply look the other way and deny their share of responsibility even as content on their platforms incites criminality, abuse and even murder.” The UK government will now consult on the contents of the White Paper until 1 July.

Lawmakers in Australia have moved even more swiftly, last week rushing through new legislation that could see managers of social media firms jailed if their platforms are used for the livestreaming of real-life violent content. Under the new rules, social media managers could face three years behind bars and a large fine. It is looking increasingly likely that authorities in New Zealand might introduce similar legislation, with the country’s Privacy Commissioner John Edwards this week tweeting: “[Platforms such as Facebook] allow the livestreaming of suicides, rapes, and murders, continue to host and publish the mosque attack video, allow advertisers to target ‘Jew haters’ and other hateful market segments, and refuse to accept any responsibility for any content or harm.” Even in the US, where First Amendment rights to freedom of speech make it more difficult to regulate the dissemination of some online content, representatives from Facebook and Google this week appeared before a congressional hearing on white nationalism and hate speech on social media platforms.

While some commentators have welcomed the fact that the so-called “online Wild West” may finally be coming to an end, there are serious concerns that the type of legislation currently being considered in the UK could have grave implications when it comes to freedom of speech, not least on account of the fact that it appears future governments might be able to change the definition of what can and cannot be published online. Worries have also been raised that increased regulation could be bad for competition, with only the wealthiest of social media companies having deep enough pockets to cover the cost of operating within the confines of complicated new rules.

So, having profited massively from a decades-long period during which they were able to repeatedly dodge calls for responsible self-regulation, it has now become expedient for tech giants such as Facebook, Twitter and Google to capitulate to these demands, ushering in a new era that could see both competition and free speech stifled on account of their past greed and failure to act responsibly.

 
