The IWF Hash List is a catalogue of codes, or hashes, which is updated daily and manually verified by our expert analysts. Each hash is unique: a type of digital fingerprint, or label, that identifies a picture of confirmed child sexual abuse. Each criminal image has its own individual hash.
Once an image has been hashed, it can be recognised quickly. Better still, our list of hashes can block thousands of criminal pictures from ever being uploaded to the internet in the first place.
By using our Hash List, tech companies can stop criminals from uploading, downloading, viewing, sharing or hosting known images and videos showing child sexual abuse.
Once our analysts have assessed an image or video as criminal, they give it a hash. That hash is loaded onto our Hash List. From there, the list can be used by organisations that work with images or videos. Because each hash refers to a fully assessed criminal image, a company using the list can block the image from being uploaded to its network.
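The blocking step can be sketched in general terms. The code below is a minimal illustration, not IWF's actual implementation: it assumes a hypothetical set of known hashes and uses MD5 (one of the formats the List is supplied in) to decide whether an uploaded file matches a known criminal image.

```python
import hashlib

# Hypothetical blocklist for illustration only. In practice, the hashes
# would come from the IWF Hash List, obtained through IWF membership.
# The placeholder below is the MD5 of a well-known test string, not an image.
KNOWN_HASHES = {
    "9e107d9d372bb6826bd81d3542a419d6",
}

def md5_of_file(data: bytes) -> str:
    """Return the hex MD5 digest of an uploaded file's bytes."""
    return hashlib.md5(data).hexdigest()

def should_block(data: bytes) -> bool:
    """Block the upload if its hash matches an entry on the list."""
    return md5_of_file(data) in KNOWN_HASHES
```

A platform would call `should_block` (a name assumed here for illustration) on each incoming file before accepting it, so the check runs automatically rather than relying on anyone viewing the content.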
Today, our Hash List contains hundreds of thousands of hashes, or unique digital fingerprints, and we add to it daily. We never forget that each one is an image of a real child being sexually abused; an online record of suffering and pain; the documentation of a crime scene and a place of terror for that child.
By using the Hash List, tech companies can protect customers, staff and users from accidentally stumbling on these criminal images. They can help stop perpetrators profiting from their horrific crimes. They can defend victims, the children in the pictures, from having a record of their abuse viewed, shared or traded again and again. They can protect their platform from hosting these images. They can do the right thing.
For more information about joining us and using our Hash List, take a look at IWF membership, submit a membership enquiry or get in touch with our team: email [email protected] or call +44 (0)1223 20 30 30.
Sadly, if people can upload, download, view or host images and videos on a tech platform or service, then that platform or service could be abused by criminals. But it’s also true that the Hash List could be used to protect that service from inadvertently hosting child sexual abuse imagery.
The Hash List can work in a preventative way. It can stop the abuse of legitimate websites, services or platforms by child sexual abuse perpetrators. It makes networks safer for customers and employees. It protects victims of child sexual abuse from the trauma of having a record of their abuse circulated online. It gives them peace of mind.
Every criminal image we assess is hashed by an experienced analyst. We provide the List in two formats: Microsoft’s PhotoDNA, and the more traditional MD5.
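The MD5 format is an exact-match fingerprint. As a small illustration (an assumption for demonstration, not IWF tooling), the sketch below shows why each image carries its own individual hash: changing even a single byte of the input produces a completely different digest, so an MD5 hash identifies one exact file.

```python
import hashlib

original = b"example image bytes"
altered = b"Example image bytes"  # differs from the original by one byte

h1 = hashlib.md5(original).hexdigest()
h2 = hashlib.md5(altered).hexdigest()

# Although the inputs differ by a single byte, the two 32-character
# digests bear no obvious relationship to each other.
assert h1 != h2
```

PhotoDNA, by contrast, is a proprietary Microsoft technology and is not reproduced here; it is supplied as a separate format of the List.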
Licensed Members can download the list in two ways. The first is via our API, which avoids the need for additional external hardware and makes access to our list even more secure.
Secondly, we’ve teamed up with Microsoft so that companies can use PhotoDNA to access our Hash List, with a cloud-based automated system. We know it’s impossible for most companies to manually scan every image and video that’s uploaded onto their platforms. So, our Hash List does this in the background, without affecting the experience of users.
It means companies can compare anything from a single image or video to the millions of images and videos uploaded through platforms daily. The Hash List can flag criminal images and videos already on systems and prevent new ones being uploaded.
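Both uses can be sketched together. The function names, storage layout and MD5 choice below are illustrative assumptions, not IWF's system: one function scans material already stored, the other gates new uploads against the same set of known hashes.

```python
import hashlib
from typing import Iterable

def md5_hex(data: bytes) -> str:
    """Hex MD5 digest of a file's bytes."""
    return hashlib.md5(data).hexdigest()

def scan_existing(files: Iterable[tuple[str, bytes]], known: set[str]) -> list[str]:
    """Return the names of already-stored files whose hash is on the list."""
    return [name for name, data in files if md5_hex(data) in known]

def accept_upload(data: bytes, known: set[str]) -> bool:
    """Accept a new upload only if its hash is NOT on the list."""
    return md5_hex(data) not in known
```

Because the check is a hash computation plus a set lookup, it scales from a single file to millions of daily uploads and runs in the background without affecting the experience of users.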