Content Moderation

If you enable Content Moderation in your workspace, Relay Pilot will automatically check every inbound message before it’s accepted. This helps prevent harmful content from entering your support queue.

What gets checked

When Content Moderation is on, we scan:

  • Live chat messages sent by visitors

  • The first message that starts a live chat

  • Help Center form submissions (subject + message)

  • Inbound emails (subject + body)

How it works

Each message is sent to a moderation system that evaluates the text for unsafe or harmful content (such as violence, self‑harm, sexual content, harassment, or hate). The system returns a “flagged” or “not flagged” result.
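The check described above can be sketched as a small function, purely for illustration. The category names and the shape of the result are assumptions, not Relay Pilot's actual moderation API, and a real system uses a trained classifier rather than keyword matching:

```python
# Illustrative sketch only: category names and the result shape are
# assumptions, not Relay Pilot's actual moderation API.

UNSAFE_CATEGORIES = ("violence", "self-harm", "sexual", "harassment", "hate")

def moderate(text: str) -> dict:
    """Stand-in for the moderation system.

    Returns a flagged / not-flagged result plus the categories that
    matched. A keyword match keeps the example self-contained; the
    real system evaluates meaning, not just exact words.
    """
    matched = [c for c in UNSAFE_CATEGORIES if c in text.lower()]
    return {"flagged": bool(matched), "categories": matched}
```

For example, `moderate("my order is late")` returns a not-flagged result, while a message containing an unsafe term returns `flagged: True` along with the matching categories.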

What happens if a message is flagged

If a message is flagged, it is blocked before it reaches your team. The sender sees:

“Message blocked by content moderation. Please edit and try again.”

The message is not delivered and not stored in the conversation.
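This block-before-delivery behavior amounts to a gate in front of the conversation. The sketch below shows the idea under assumed names; the function and field names are hypothetical, not Relay Pilot's implementation:

```python
# Hypothetical sketch of the block-before-delivery flow; function and
# field names are assumptions, not Relay Pilot's implementation.

BLOCK_MESSAGE = "Message blocked by content moderation. Please edit and try again."

def moderate(text: str) -> bool:
    """Stand-in moderation check (keyword match, for illustration only)."""
    return any(term in text.lower() for term in ("violence", "hate"))

def accept_inbound_message(text: str, conversation: list) -> dict:
    """Gate an inbound message on the moderation result."""
    if moderate(text):
        # Flagged: the message is neither delivered nor stored.
        return {"accepted": False, "error": BLOCK_MESSAGE}
    conversation.append(text)  # Not flagged: stored and delivered.
    return {"accepted": True}
```

Note that a flagged message leaves the conversation untouched: nothing is appended, and only the error string is returned to the sender.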

Why a user might see the error

This can happen if the message includes content the system believes violates safety rules, or if it closely resembles content that is frequently unsafe. Because the check is automated, a message may occasionally be flagged even when it wasn't intended to be harmful.

If a user sees this error, they can rephrase the message and send it again.

Performance impact

Because moderation happens before we accept the message, it may add a small delay (usually a fraction of a second) to inbound message processing.

Last updated Jan 16, 2026