Browser & GenAI DLP

With attackers employing increasingly sophisticated phishing techniques, traditional security solutions may fail to detect these threats, and employees can unwittingly enter sensitive information on phishing pages. Even in such scenarios, employees' credentials and sensitive data must be protected from leaking.

To address this, nearly every SASE/SSE provider now includes Data Loss Prevention (DLP) as part of their offerings through CASBs and Secure Web Gateways (SWGs). However, cloud-based proxies typically rely on network traffic analysis, which is stateless and lacks context, resulting in limited visibility into client-side activities. Moreover, SWGs can be easily bypassed through various client-side manipulations within the browser itself.

SquareX offers a more comprehensive solution by integrating DLP capabilities directly into the browser environment, ensuring that even if an employee lands on a phishing page, their credentials and sensitive information remain secure. This approach provides greater visibility and control, effectively mitigating the risk of data leaks and enhancing overall enterprise security.
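As a rough illustration of what browser-level credential protection can look like, the sketch below shows how a content script could block submission of known corporate credentials on hosts outside an allowlist. The hostnames, the `ALLOWED_LOGIN_HOSTS` and `KNOWN_CREDENTIAL_HASHES` names, and the hash-comparison approach are assumptions for illustration only, not SquareX's actual implementation.

```typescript
// Illustrative content-script sketch: block submission of known corporate
// credentials on hosts outside an allowlist. All names and the policy shape
// here are assumptions for illustration, not SquareX's implementation.

const ALLOWED_LOGIN_HOSTS = new Set([
  "login.example-corp.com", // hypothetical corporate IdP
  "sso.example-corp.com",
]);

// Salted hashes of corporate passwords, assumed to be synced from an admin console.
const KNOWN_CREDENTIAL_HASHES = new Set<string>();

async function sha256Hex(value: string): Promise<string> {
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(value));
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

document.addEventListener(
  "submit",
  (event) => {
    const form = event.target as HTMLFormElement;
    const passwordFields = Array.from(
      form.querySelectorAll<HTMLInputElement>('input[type="password"]'),
    );
    if (passwordFields.length === 0 || ALLOWED_LOGIN_HOSTS.has(location.hostname)) return;

    // Block synchronously, then decide asynchronously whether to let the submit proceed.
    event.preventDefault();
    event.stopImmediatePropagation();

    void (async () => {
      for (const field of passwordFields) {
        if (KNOWN_CREDENTIAL_HASHES.has(await sha256Hex(field.value))) {
          console.warn("Blocked: corporate credential entered on", location.hostname);
          return; // drop the submission entirely
        }
      }
      form.submit(); // no corporate credential found; resubmit without re-triggering handlers
    })();
  },
  true, // capture phase so the check runs before the page's own handlers
);
```

Because `form.submit()` bypasses submit event handlers, the resubmission does not re-enter the same check.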

Preventing Password Reuse Across SaaS Apps

SquareX helps protect against password reuse across SaaS applications, even when those apps lack Single Sign-On (SSO) support. Administrators can create policies that require a unique password for each application, strengthening the overall security posture, as well as policies that enforce a minimum strength score whenever a new password is set.
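As a sketch of how reuse detection and strength checks could work in the browser, the snippet below keeps a salted fingerprint of passwords seen per host and flags a new password that matches one used on a different app or falls below a minimum strength score. The policy fields, salt, and scoring heuristic are illustrative assumptions rather than SquareX's actual policy schema.

```typescript
// Illustrative sketch of per-app password reuse detection and strength scoring.
// The policy shape, salt, and heuristic below are assumptions, not SquareX's schema.

interface PasswordPolicy {
  minStrengthScore: number; // 0-4 scale in this sketch
  blockReuseAcrossApps: boolean;
}

const policy: PasswordPolicy = { minStrengthScore: 3, blockReuseAcrossApps: true };

// host -> salted fingerprints of passwords previously observed on that host
const fingerprintsByHost = new Map<string, Set<string>>();

function strengthScore(password: string): number {
  // Placeholder heuristic; a real deployment would use a proper strength estimator.
  let score = 0;
  if (password.length >= 12) score++;
  if (/[a-z]/.test(password) && /[A-Z]/.test(password)) score++;
  if (/\d/.test(password)) score++;
  if (/[^A-Za-z0-9]/.test(password)) score++;
  return score;
}

async function fingerprint(password: string): Promise<string> {
  const data = new TextEncoder().encode("org-salt:" + password); // hypothetical org-wide salt
  const digest = await crypto.subtle.digest("SHA-256", data);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

async function checkNewPassword(host: string, password: string): Promise<string[]> {
  const violations: string[] = [];
  if (strengthScore(password) < policy.minStrengthScore) {
    violations.push("Password is below the required strength score.");
  }
  const fp = await fingerprint(password);
  if (policy.blockReuseAcrossApps) {
    for (const [otherHost, fps] of fingerprintsByHost) {
      if (otherHost !== host && fps.has(fp)) {
        violations.push(`Password is already in use on ${otherHost}.`);
      }
    }
  }
  if (violations.length === 0) {
    if (!fingerprintsByHost.has(host)) fingerprintsByHost.set(host, new Set());
    fingerprintsByHost.get(host)!.add(fp);
  }
  return violations;
}
```

A call like `checkNewPassword(location.hostname, newPasswordField.value)` (names hypothetical) would then return the list of violations to surface to the user; in a real deployment the fingerprints would presumably be synced via the admin console rather than held in page memory.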

Block clipboard copy into ChatGPT

AI applications such as ChatGPT can be of great help at work; however, employees have to be careful when using content from AI applications, as it may contain inaccuracies or be incomplete. It is crucial to verify the information and ensure that it aligns with the organization's policies and standards, since unchecked use can pose security risks and potential breaches of confidentiality. Employees should also be aware of licensing and intellectual property issues, as AI-generated content may carry specific usage restrictions; ensuring proper attribution and compliance with licensing terms is essential to avoid legal complications. Instead of blocking AI applications completely, enterprises can apply granular policies. Using the policy-generating copilot, admins can prompt ‘Block clipboard copy into ChatGPT’ to generate the appropriate policy. The expected outcome is a policy that blocks clipboard copy actions on ChatGPT.
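A minimal sketch of how such enforcement could look inside the browser, assuming `chat.openai.com` and `chatgpt.com` as ChatGPT hostnames: a content script cancels the clipboard copy event on those pages. The hostnames and the policy object are assumptions; the actual policy is generated and enforced by SquareX's engine rather than a standalone script.

```typescript
// Minimal sketch: cancel clipboard "copy" events on assumed ChatGPT hostnames
// when the policy flag is enabled. Hostnames and policy shape are illustrative.

const AI_APP_HOSTS = ["chat.openai.com", "chatgpt.com"]; // assumed ChatGPT hosts

const clipboardPolicy = { blockCopyOnAiApps: true }; // hypothetical policy flag

function onAiAppPage(): boolean {
  return AI_APP_HOSTS.some(
    (host) => location.hostname === host || location.hostname.endsWith("." + host),
  );
}

document.addEventListener(
  "copy",
  (event: ClipboardEvent) => {
    if (!clipboardPolicy.blockCopyOnAiApps || !onAiAppPage()) return;
    event.preventDefault(); // nothing is written to the clipboard
    event.stopImmediatePropagation();
    console.warn("Copy blocked by DLP policy on", location.hostname);
  },
  true, // capture phase, before the page's own handlers
);
```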

Block clipboard paste from ChatGPT

While using AI applications such as ChatGPT, employees must exercise caution when sharing information, as it could be company confidential and may be used by the AI for training purposes. To mitigate this risk, enterprises can apply policies that block paste operations on AI applications. Using the policy-generating copilot, admins can prompt ‘Block Clipboard Paste from ChatGPT’ to generate the appropriate policy. The expected outcome is a policy that blocks clipboard paste actions on ChatGPT.
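A similarly hedged sketch of paste blocking: the clipboard paste event is cancelled on the assumed ChatGPT hostnames, so pasted content never reaches the prompt box. Again, the hostnames are assumptions rather than a confirmed list.

```typescript
// Minimal sketch: cancel clipboard "paste" events on assumed ChatGPT hostnames
// so that potentially confidential text never reaches the page.

const AI_APP_HOSTS = ["chat.openai.com", "chatgpt.com"]; // assumed ChatGPT hosts

document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    const onAiApp = AI_APP_HOSTS.some(
      (host) => location.hostname === host || location.hostname.endsWith("." + host),
    );
    if (!onAiApp) return;
    event.preventDefault(); // the pasted content is discarded
    event.stopImmediatePropagation();
    console.warn("Paste blocked by DLP policy on", location.hostname);
  },
  true, // capture phase
);
```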

Block file uploads to ChatGPT

Employees might find it convenient to upload meeting notes and other documents to ChatGPT and ask it to perform a myriad of actions, such as summarizing a document or changing its format. However, this exposes the enterprise to data leakage, as sensitive information could be shared with these AI models. Using the policy-generating copilot, admins can prompt ‘Block file uploads to ChatGPT’ to generate the appropriate policy. The expected outcome is a policy that blocks file uploads to ChatGPT.
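A rough sketch of how upload blocking could work in the browser: file selections made through file inputs are discarded and drag-and-drop is cancelled on the assumed ChatGPT hostnames. As before, the hostnames and the standalone-script framing are illustrative assumptions, not SquareX's actual enforcement mechanism.

```typescript
// Minimal sketch: block file uploads on assumed ChatGPT hostnames by clearing
// file-input selections and cancelling drag-and-drop. Hostnames are illustrative.

const AI_APP_HOSTS = ["chat.openai.com", "chatgpt.com"]; // assumed ChatGPT hosts

function onAiAppPage(): boolean {
  return AI_APP_HOSTS.some(
    (host) => location.hostname === host || location.hostname.endsWith("." + host),
  );
}

// Block files chosen through <input type="file">
document.addEventListener(
  "change",
  (event) => {
    const input = event.target as HTMLInputElement;
    if (!onAiAppPage() || input.type !== "file" || !input.files?.length) return;
    input.value = ""; // discard the selected files
    event.stopImmediatePropagation();
    console.warn("File upload blocked by DLP policy on", location.hostname);
  },
  true, // capture phase
);

// Block files dragged into the page
document.addEventListener(
  "drop",
  (event: DragEvent) => {
    if (!onAiAppPage() || !event.dataTransfer?.files.length) return;
    event.preventDefault();
    event.stopImmediatePropagation();
    console.warn("File drop blocked by DLP policy on", location.hostname);
  },
  true, // capture phase
);
```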