Titan OS offers accessibility features designed to help you create inclusive applications for Smart TVs. This guide will walk you through implementing Text-to-Speech (TTS) and Text Magnification (TM) across Titan OS devices.

Core Accessibility features

Titan OS provides the following features for enhancing accessibility:

Text-to-Speech (TTS)

Converts on-screen text into spoken audio, enabling navigation for visually impaired users. In the Titan OS ecosystem, the implementation strategy depends on the device brand:
  • Philips Devices: Integration is through the Titan SDK. The application must explicitly invoke the Titan SDK’s startSpeaking() function to trigger speech.
  • JVC Devices: Integration is native. The device uses an automatic screen reader that interprets standard WAI-ARIA attributes (e.g., aria-label, role). Developers should rely on standard web accessibility practices rather than SDK methods for these devices.
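
For JVC devices, exposing an accessible name through standard ARIA markup is usually all that is needed. A minimal sketch (the element id and label text here are hypothetical):

    // JVC: the native screen reader announces focused elements from ARIA attributes,
    // so no SDK call is required; just standard web accessibility markup.
    const playButton = document.getElementById('play-button'); // hypothetical element
    playButton.setAttribute('role', 'button');
    playButton.setAttribute('aria-label', 'Play movie');
    playButton.setAttribute('tabindex', '0'); // reachable via D-Pad focus navigation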

Text Magnification (TM)

Text Magnification (TM) allows users to increase font size and contrast for better readability. Unlike TTS, this feature is consistently managed through the Titan SDK across all devices. When a user enables magnification in the OS system settings, the SDK exposes this preference to your application. You are responsible for detecting this property and programmatically adjusting your UI’s text scaling to match the requested size.
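
A minimal sketch of reading the magnification preference on startup and applying it to the UI (the scale-to-root-font-size mapping is an assumption; adapt it to your own layout):

    const { accessibility } = titanSDK;

    async function applyTextMagnification() {
        try {
            const tmSupported = await accessibility.isTextMagnificationSupported();
            if (!tmSupported) return;

            const tmSettings = await accessibility.getTMSettings();
            if (tmSettings.enabled) {
                // Scale the root font size so rem-based text follows the user's preference
                document.documentElement.style.fontSize = `${tmSettings.scale}em`;
            }
        } catch (error) {
            console.error("Error applying Text Magnification:", error);
        }
    }

    // Call once during app startup, after the SDK is available
    applyTextMagnification();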

Getting Started

This section provides practical steps and code snippets to begin integrating accessibility features into your Titan OS application.

Basic TTS and TM Setup using the Titan SDK

Before using TTS or TM, it’s essential to check whether the feature is supported by the device and whether the user has enabled it in the TV’s operating system settings. The getTTSSettings() and getTMSettings() functions return an object with an enabled property indicating the user’s preference in the TV’s home screen settings.
  • Checking Support & User Settings:
    async function checkAccessibilitySupport() {
        try {
            const { accessibility } = titanSDK;
    
            const ttsSupported = await accessibility.isTTSSupported();
            const tmSupported = await accessibility.isTextMagnificationSupported();
    
            console.log(`Text-to-Speech Supported: ${ttsSupported}`);
            console.log(`Text Magnification Supported: ${tmSupported}`);
    
            if (ttsSupported) {
                const ttsSettings = await accessibility.getTTSSettings();
                console.log(`TTS is enabled by user: ${ttsSettings.enabled}`);
                // ttsSettings will return an object like: {"enabled": false} if disabled
            }
    
            if (tmSupported) {
                const tmSettings = await accessibility.getTMSettings();
                console.log(`Text Magnification is enabled by user: ${tmSettings.enabled}, scale: ${tmSettings.scale}`);
            }
        } catch (error) {
            console.error("Error checking accessibility support:", error);
        }
    }
    
    // Call this function early in your app's lifecycle
    checkAccessibilitySupport();
    

Programmatic Text-to-Speech Control

The Titan SDK’s startSpeaking() function provides direct control over the TV’s speech output. It does not automatically read content from your DOM elements or ARIA attributes. As the developer, you are responsible for extracting the specific text you wish to be spoken and passing it as a string (or an array of strings) to this function. For example, you might extract text from a focused element’s textContent, innerText, or its aria-label attribute.
  • Initiating Speech:
    const speakElementText = async (elementId) => {
        try {
            const { accessibility } = titanSDK;
    
            // Ensure TTS is supported and enabled before attempting to speak
            const ttsSupported = await accessibility.isTTSSupported();
            const ttsSettings = await accessibility.getTTSSettings();
    
            if (!ttsSupported || !ttsSettings.enabled) {
                console.warn("TTS is not supported or not enabled by the user.");
                return;
            }
    
            const element = document.getElementById(elementId);
            if (element) {
                // Example: Prioritize aria-label, fallback to textContent
                const textToSpeak = element.getAttribute('aria-label') || element.textContent || element.innerText;
                if (textToSpeak) {
                    await accessibility.startSpeaking(textToSpeak);
                    console.log(`Speaking: "${textToSpeak}"`);
                } else {
                    console.warn("No text found to speak for element:", elementId);
                }
            }
        } catch (error) {
            console.error("Error speaking text:", error);
        }
    };
    
    // Example usage (assuming an HTML element with id="myButton" or similar)
    // document.getElementById('myButton').addEventListener('focus', () => speakElementText('myButton'));
    
The GitHub examples linked in the Real-World Examples & Resources section below provide more complete implementations. Note: as explained in the Text-to-Speech (TTS) topic above, this feature is only available on Philips devices. The next topic explains how to enable or disable it depending on the device brand. If your app is published only on Philips devices, the implementation using the Titan SDK should be enough.
  • Stopping Speech:
    import { accessibility } from '@titanos/sdk'; // Assuming the SDK's accessibility module is available, as in the previous examples
    
    // Call this to immediately stop any ongoing speech
    accessibility.stopSpeaking();
    

TTS Implementation across device brands (Brand Check)

As mentioned, JVC devices utilize a native accessibility reader that automatically announces focused elements based on ARIA attributes. Invoking startSpeaking() on these devices may result in conflicting audio or redundant speech. Conversely, Philips devices require the SDK to trigger speech. To handle this, you should check the device brand during initialization and conditionally execute the TTS logic.
  • Implementation:
    // 1. Initialize logic variable (you can use your preferred state management)
    let useSDKTTS = true;
    
    async function setupAccessibilityLogic() {
        try {
            const { deviceInfo, accessibility } = titanSDK; 
    
            // A. Check Device Brand
            const info = await deviceInfo.getDeviceInfo();
            const brand = info.Channel?.brand || ''; 
    
            console.log(`Device Brand detected: ${brand}`);
    
            // If JVC, we disable the SDK manual calls to avoid conflict with Native Reader
            if (brand && brand.toUpperCase().includes('JVC')) {
                useSDKTTS = false;
                return;
            }
    
            // B. If not JVC (e.g., Philips), check if User enabled TTS in OS
            const isSupported = await accessibility.isTTSSupported();
            const settings = await accessibility.getTTSSettings();
    
            // Check if supported AND if the user actually enabled it in settings
            if (!isSupported || !settings || !settings.enabled) {
                useSDKTTS = false;
                return;
            }
    
            // Only if brand is valid AND user enabled TTS, we use the SDK
            useSDKTTS = true;
    
        } catch (error) {
            console.warn("Could not determine device brand or settings, defaulting to safe mode (SDK TTS off)", error);
            useSDKTTS = false;
        }
    }
    
    // 2. Use the variable in your focus handler
    async function handleFocus(event) {
        // Only call startSpeaking if logic determined we should use SDK
        if (useSDKTTS) {
            const { accessibility } = titanSDK;
    
            await accessibility.stopSpeaking(); // Clear previous speech
    
            const textToSpeak = event.target.getAttribute('aria-label');
            if (textToSpeak) {
                await accessibility.startSpeaking(textToSpeak);
            }
        }
        // If useSDKTTS is false (JVC), the TV's native reader handles the ARIA label automatically.
    }
    
    // Call setup once on app load
    setupAccessibilityLogic();              
    
    Note: As mentioned previously, whether you need this device check depends on where your app is published. If your app is only published on JVC devices, rely on the automatic Screen Reader. If it is only published on Philips devices, the SDK alone is enough. If it is published on both, you will need this brand check.

Responding to User Preferences

Users can enable or disable accessibility features from the Titan OS home screen settings. Your application should listen for these changes and adapt its behavior accordingly.
  • Monitoring Accessibility Settings:
    const { accessibility } = titanSDK;
    
    // Listen for TTS setting changes
    const unsubscribeTTS = accessibility.onTTSSettingsChange((settings) => {
        console.log('TTS settings changed:', settings);
        if (settings.enabled) {
            console.log('TTS was enabled by the user in OS settings.');
            // Optionally enable your accessibility reader here
            // accessibility.enableReader({ verbosity: 'standard' });
        } else {
            console.log('TTS was disabled by the user in OS settings.');
            // Optionally disable your accessibility reader here
            // accessibility.disableReader();
        }
    });
    
    // Listen for Text Magnification setting changes
    const unsubscribeTM = accessibility.onTMSettingsChange((settings) => {
        console.log('Text Magnification settings changed:', settings);
        if (settings.enabled) {
            console.log(`Text Magnification enabled: Scale ${settings.scale}x.`);
            // Apply UI changes for larger text (e.g., adjust document.documentElement.style.fontSize)
            document.documentElement.style.fontSize = `${settings.scale}em`;
        } else {
            console.log('Text Magnification disabled.');
            // Reset text size
            document.documentElement.style.fontSize = '';
        }
    });
    
    // Remember to unsubscribe from listeners when they are no longer needed
    // unsubscribeTTS();
    // unsubscribeTM();
    

Testing Your Accessible App

Thorough testing is vital to ensure your app is truly accessible.
  • Manual Testing with Remote Control:
    • Navigate your entire application using only the TV remote’s D-Pad (directional buttons) and the OK/Enter button (a minimal key-handling sketch follows this list).
    • Ensure every interactive element (buttons, links, inputs) is reachable and highlightable.
    • Verify that when the Accessibility Reader is enabled, all relevant elements are announced correctly as focus moves.
    • Test dynamic content updates (e.g., loading messages, form errors) to ensure they are spoken.
  • Live Testing on TV:
    • Test directly on Philips 2025 and JVC 2025 TV models with Accessibility settings (TTS, Text Magnification) enabled/disabled via the OS home screen. This provides the most accurate user experience.
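
As a starting point for manual D-Pad testing, the sketch below wires the remote’s arrow and OK keys to focus movement and reuses the handleFocus() handler from the brand-check section for announcements. The key names and the focusable-element query are assumptions; some TV browsers report numeric keyCodes instead.

    // Minimal D-Pad navigation harness for manual testing (key names assumed).
    const focusables = () => Array.from(document.querySelectorAll('[tabindex="0"]'));

    document.addEventListener('keydown', (event) => {
        const items = focusables();
        const index = items.indexOf(document.activeElement);

        switch (event.key) {
            case 'ArrowRight':
            case 'ArrowDown':
                items[Math.min(index + 1, items.length - 1)]?.focus();
                break;
            case 'ArrowLeft':
            case 'ArrowUp':
                items[Math.max(index - 1, 0)]?.focus();
                break;
            case 'Enter': // OK button
                document.activeElement?.click();
                break;
        }
    });

    // Announce focused elements via the SDK where applicable (see handleFocus above)
    document.addEventListener('focusin', handleFocus);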

Real-World Examples & Resources

To see these features in action and understand more about the integration, explore the examples on GitHub. We are committed to receiving feedback and continuously improving our documentation and examples.

Next steps