Description: |
Growing evidence shows that proactive content moderation supported by AI can help improve online discourse. However, we know little about how to design these systems, how design impacts efficacy and user experience, and how people perceive proactive moderation across public and private platforms. We developed a mobile keyboard with built-in proactive content moderation, which we tested (N=575) within a semi-functional simulation of public and private communication platforms. When toxic content was detected, we deployed interventions that varied three design factors: timing, friction, and the presentation of the AI model's output. We found moderation to be effective regardless of the design. However, friction was a source of annoyance, while frictionless prompts shown during typing were more effective. Follow-up interviews highlight differences in how these systems are perceived across public and private platforms, and how they can offer more than moderation by acting as educational and communication support tools.