Artificial Intelligence and Racial Discrimination at the U.S. Border
Artificial intelligence is now used in many settings, and it is becoming increasingly common at the U.S. border as well.
However, some human rights organizations say these systems are often less fair to Black immigrants and other immigrants of color.
For example, surveillance towers and drones continuously track people at the border, treating many who are seeking safety as threats; this pushes migrants onto more dangerous routes and increases the risk of death.
The previously used CBP One app also had problems: it sometimes failed to recognize the faces of people with darker skin, and it lacked translations for some languages commonly spoken by Black immigrants.
After immigrants enter the United States, some systems also assign them a "risk score" to decide who should be monitored more closely, but the criteria behind these scores are not public and are very difficult to challenge.
Some AI systems also screen asylum applications and evidence, which may disadvantage people who do not speak English or whose documents do not fit expected formats.
Many advocacy groups argue that before deploying AI, the United States should ensure the systems do not racially discriminate, make their rules public, explain decisions to the people affected, and allow those people to respond and file appeals.