AI technology to fight online harassment faster

A black background with colorful speech bubbles, containing censored insults and curses.

With the CS-2 system from Cerebras, AI models for the recognition of hate speech on social media platforms can be trained faster, and offensive texts can be detected more effectively. This is shown by a study from the LRZ that compares different AI accelerator systems.

Identifying hate speech and removing it from the digital world: This is a task that social media platforms and online media need to master as quickly as possible. Artificial intelligence (AI), in particular pre-trained large language models (LLMs), is helping them to do this. Additional computing power provides a further advantage – especially clusters equipped with graphics processing units (GPUs) or AI systems such as the CS-2 from Cerebras Systems, whose chip, the Wafer Scale Engine 2, was designed specifically to meet the needs of LLMs.

In a study, researchers at the Leibniz Supercomputing Centre (LRZ) compared the performance and effort of different AI technologies when implementing and fine-tuning language models: “Compared to classic training setups, the specialised AI accelerator from Cerebras speeds up training times by a factor of four,” reports Dr Michael Hoffmann, a specialist in big data and AI at the LRZ. “However, the Cerebras system is very new, so tasks such as preparation and compilation still require considerable effort, and only a limited number of language models can be transferred to the CS-2 system.”


Read more about the hardware installation in the LRZ press releases.