So-called ‘gay face’ has been in the media once more this week after a video claiming it exists went viral.
YouTube science educators Mitch Moffit and Greg Brown cited controversial research which found that gay people have different physical features from their straight counterparts.
Their claims that AI could be trained to recognise someone’s sexuality were picked up in newspaper reports – but experts in the field said they strongly doubted this was reliable.
Dominic Lees, a professor specialising in AI at the University of Reading, said Moffit and Brown had not carried out any original research, but had only reviewed earlier studies.
He told Metro: ‘These studies have clearly not been peer-reviewed. An academic review of the work would point out that every image shown is of a white person’s face, despite the report’s claims to make universal observations about “gay face”.
‘On this issue alone, the report cannot be trusted. Physiognomy varies enormously with ethnicity, ruling out any attempt to make generalisations about sexuality.’
In the video on their YouTube channel ‘AsapSCIENCE’, Moffit and Brown said prior research found gay men had shorter noses and larger foreheads, while lesbians have ‘upturned noses and smaller foreheads’.
They referred to this phenomenon as ‘gay face’ – the idea that gay people have certain facial traits in common.
But the research they highlighted has been critiqued in the past, with critics calling it ‘dangerous’ and ‘junk science’.
Cybersecurity expert James Bore told Metro that studies like these come with a range of ethical and accuracy issues, including potential biases in AI.
Mr Bore said: ‘We don’t know what data they’ve included or what data they’ve used, how they’ve trained the model or the assumptions that have been applied. We don’t know how they selected the data or whether they cherry-picked it.
‘This information should be included in the detail of the actual publication, but often it isn’t, or it is glossed over.
‘There’s been this view that AI is infallible, that simply saying “we used an AI model” means this is completely accurate, where actually what we’ve seen time and time again is models not only carrying on human biases but enshrining them in an authoritative way.
‘It’s junk science, it’s superstition, and we do not have the data to say whether there’s anything to it or not.’
And even when AI is not involved, there is still the question of ethics.
Mr Bore explained: ‘There are issues around prejudices, around outing people who don’t want to be outed, or identifying people who may not want to be identified as part of a particular group for whatever reason.’
A controversial history
Researchers have previously tried to determine whether or not it is possible to tell someone’s sexuality from their face – and have been heavily criticised for it.
In 2017, an AI model from Stanford University was criticised for using photographs from dating apps to discern whether someone was gay or straight, based on their facial features and the sexual preference stated on the app.
The researchers behind Stanford’s model later described criticism of it as a ‘knee-jerk reaction’.
But Mr Bore pointed out the dangers of taking this kind of study at face value.
He said: ‘People have been persecuted and have died in the past because this sort of research has been used to identify people as part of a group, and then they’ve been imprisoned, killed, or driven out of countries.
‘But we have knee-jerk reactions for a reason, and anyone involved in this study really needs to stop and think and consider the potential consequences, especially if they’re going to release the model.
‘We have countries where being gay is a criminal offence.
‘Any technology or facial study which claims to be able to identify someone’s sexuality from their face is going to be abused in those countries.’
In 2023, it was revealed that the UK planned to split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than creating a new body dedicated to the technology.
AI, which is rapidly evolving with advances such as the ChatGPT app, could improve productivity and help unlock growth.
But there are concerns about the risks it could pose to people’s privacy, human rights or safety, the government said.
With the aim of striking a balance between regulation and innovation, the government plans to use existing regulators in different sectors rather than giving responsibility for AI governance to a single new regulator.
It said that over the next 12 months, existing regulators would issue practical guidance to organisations, as well as other tools and resources such as risk assessment templates.