GuardAPI

Fix XSS in API Responses in Hanami

XSS in API responses occurs when an endpoint reflects untrusted input without enforcing a strict 'application/json' content type or proper character escaping. In Hanami, if an action defaults to 'text/html' or fails to sanitize output, an attacker can deliver a payload that executes in the context of the victim's session.

The Vulnerable Pattern

module Web::Actions::Search
  class Index
    include Web::Action

    def call(params)
      # VULNERABLE: Manual string interpolation and missing format enforcement.
      # If a browser sniffs this as HTML, <script> tags will execute.
      self.body = "{ \"results\": \"Results for #{params[:q]}\" }"
    end
  end
end
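To see the breakout concretely, interpolating a crafted 'q' value into the hand-built string produces a body that is not even valid JSON; the payload below is a hypothetical attacker input:

```ruby
require 'json'

# Hypothetical attacker-controlled value for params[:q]
payload = '"}<script>alert(document.cookie)</script>'

# Same manual interpolation as the vulnerable action above
body = "{ \"results\": \"Results for #{payload}\" }"

# The unescaped quote terminates the JSON string early, leaving raw
# <script> markup in the response body.
begin
  JSON.parse(body)
rescue JSON::ParserError => e
  puts "Body is no longer valid JSON: #{e.class}"
end
```

A browser that treats this response as HTML will execute the injected script, which is exactly what the content-type and escaping fixes below prevent.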

The Secure Implementation

The fix involves two layers of defense. First, 'format :json' sets the 'Content-Type' header to 'application/json', which prevents modern browsers from parsing the response as HTML. Second, using 'Hanami::Utils::Json.dump' (or a library like Oj) ensures that quotes, backslashes, and control characters are escaped inside JSON strings, so an injected payload cannot break out of its string context and be rendered as markup. Additionally, ensure your Hanami configuration sends 'X-Content-Type-Options: nosniff' to prevent MIME-type sniffing.
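The 'nosniff' header can be enabled in the application configuration; a minimal sketch, assuming a Hanami 1.x app (this setting ships in the generated apps/web/application.rb):

```ruby
# apps/web/application.rb (Hanami 1.x)
module Web
  class Application < Hanami::Application
    configure do
      # Instruct browsers not to second-guess the declared Content-Type
      security.x_content_type_options 'nosniff'
    end
  end
end
```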

module Web::Actions::Search
  class Index
    include Web::Action

    # SECURE: Force the response format to JSON to set 'Content-Type: application/json'
    format :json

    def call(params)
      # SECURE: Use a proper JSON serializer to handle character escaping
      results = { results: "Results for #{params[:q]}" }
      self.body = Hanami::Utils::Json.dump(results)
    end
  end
end
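A quick check of the serializer's behavior, using Ruby's standard 'json' library (which, in recent versions, is what 'Hanami::Utils::Json' delegates to; the escaping guarantees are the same): the attacker's quote is escaped, so the payload stays inside its string and round-trips as plain data.

```ruby
require 'json'

# Hypothetical attacker-controlled value for params[:q]
payload = '"><script>alert(1)</script>'

body = JSON.generate(results: "Results for #{payload}")

# The double quote is escaped to \" inside the JSON string, so the
# payload cannot terminate the string and inject raw markup.
puts body

# The response still parses, and the payload is preserved as data,
# not executed as markup.
JSON.parse(body)['results']
```

Combined with the 'application/json' content type and 'nosniff' header, the payload is inert: it reaches the client as a string value rather than executable HTML.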


Verified by Ghost Labs Security Team

This content is continuously validated by our automated security engine and reviewed by our research team. Ghost Labs analyzes over 500+ vulnerability patterns across 40+ frameworks to provide up-to-date remediation strategies.