AI Transparency Policy

Last Updated: March 2026

RealScroll is designed to increase transparency around digital media authenticity. As AI-generated and manipulated media becomes increasingly common across the internet, RealScroll provides tools that help users better understand the nature of the content they view.

This policy explains how RealScroll approaches AI-generated media, manipulated content, deepfakes, and authenticity indicators on the platform.

Media Analysis

Photos and videos uploaded to RealScroll may be analyzed using automated detection systems designed to identify indicators of artificial generation or digital manipulation.

These systems analyze certain characteristics within media files to help determine whether content may contain elements associated with synthetic or AI-generated media.

Media analysis may occur automatically when content is uploaded or when content is reviewed by platform systems.

These analyses are used to support transparency tools, moderation systems, and informational indicators displayed within the platform.

Detection Technology

To help analyze uploaded media, RealScroll may use automated moderation and detection technologies designed to identify potential indicators of artificial or manipulated content.

These systems may include technologies developed internally as well as third-party technologies provided by trusted partners. For example, RealScroll may use analysis tools from technology partners such as Sightengine.

Sightengine’s technology analyzes certain characteristics of images and videos to detect signals that may suggest synthetic generation, manipulation, or other forms of artificial media.

Detection systems may evaluate signals such as:

Pixel-level manipulation patterns
Visual inconsistencies associated with synthetic generation
Deepfake-related indicators
Media artifacts commonly associated with AI generation
Other characteristics associated with altered or generated media

The results of these analyses may be used to generate transparency indicators or moderation signals within the platform.

Transparency Indicators

When media is analyzed by automated systems, RealScroll may provide transparency indicators to help users better understand the characteristics of the content they are viewing.

These indicators may appear within the platform interface and may provide contextual information about the media.

Examples of transparency indicators may include:

Possible AI Generated Media
Synthetic Media Indicators Detected
No AI Indicators Detected
Media Analysis Result

These indicators are intended to provide additional context and transparency regarding uploaded content.

They are informational in nature and should not be interpreted as definitive conclusions regarding the authenticity of any media.

Accuracy and Limitations

AI detection technologies are still evolving and do not always produce accurate results.

Detection results represent probabilistic analysis based on available signals, training data, and machine learning models.

As a result:

Detection systems may produce false positives
Detection systems may produce false negatives
Detection results may change as detection technology improves
Detection outputs may not reflect the full context of how content was created

Results generated by automated systems should not be interpreted as definitive proof that media is authentic, artificially generated, or manipulated.

RealScroll does not certify, guarantee, or represent the authenticity of user-generated content.

Users should evaluate content responsibly and understand that automated detection technologies have inherent limitations.

Human Review

In certain cases, automated analysis may not be able to determine content characteristics with sufficient confidence.

When this occurs, RealScroll may perform additional review processes. This may include moderation review conducted by trained reviewers.

Human review may be used to determine appropriate platform actions, including applying labels, restricting visibility, removing content, or allowing content to remain visible with transparency indicators.

Deepfakes and Manipulated Content

Content that intentionally impersonates individuals or deceptively alters reality in a harmful or misleading manner may violate RealScroll platform rules.

Examples include:

Deepfakes designed to impersonate individuals
Manipulated media intended to mislead viewers about real-world events
Synthetic content used to harass or defraud others
Content designed to falsely represent people or situations

Content that violates these rules may be labeled, restricted, removed, or subject to account enforcement actions.

Explicit or Sensitive Synthetic Content

Some synthetic or manipulated media may also fall under RealScroll’s Explicit Content or Sensitive Content policies.

If media contains graphic or disturbing imagery, the platform may apply additional safeguards including:

Content blur screens
Explicit content warnings
Visibility restrictions
User confirmation prompts before viewing

These safeguards are designed to protect users from unexpected exposure to sensitive material.

User Responsibility

Users are responsible for ensuring that the content they upload complies with applicable laws and RealScroll platform policies.

Users should not upload content that intentionally deceives viewers, impersonates others, spreads misinformation through manipulated media, or violates the rights of others.

Users remain solely responsible for the content they upload and distribute through the platform.

Platform Responsibility and Limitations

RealScroll provides tools designed to increase transparency around digital media.

However, RealScroll does not guarantee that its detection systems can accurately identify all synthetic or manipulated content.

RealScroll does not certify the authenticity of any user-generated content on the platform.

Detection systems, moderation tools, and transparency indicators are provided as informational tools only.

RealScroll is not responsible for user-generated content and does not verify the accuracy or authenticity of every post uploaded to the platform.

Commitment to Transparency

RealScroll is committed to improving transparency around digital media and continuing to evolve detection tools as technology advances.

Our goal is to help users better understand the content they see online while maintaining a fair, open, and responsible platform for digital expression.

As artificial intelligence technology continues to develop, RealScroll may update its systems, transparency tools, and policies to reflect new developments.

Policy Updates

RealScroll may update this policy periodically to reflect changes in technology, platform functionality, or applicable law.

Users are encouraged to review this policy from time to time for updates.

Contact

If you have questions about RealScroll moderation or transparency practices, you may contact:

team@realscroll.com

For copyright-related matters, please contact:

copyright@realscroll.com

Trust Your Feed Again

Adding visibility and accountability to modern social media.
