The Security Benefits and Risks of AI for SAP

The use of AI in cybersecurity offers many benefits, from automation to predictive analysis. This new intelligent horsepower helps organizations analyze vast amounts of data, identify patterns and anomalies, and detect potential security incidents faster than humans can.
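To make that idea concrete, the short Python sketch below shows one way a security team might flag anomalous log entries with an off-the-shelf anomaly-detection model. It is purely illustrative, not an SAP product; the feature names, sample data, and thresholds are hypothetical.

```python
# Minimal, illustrative sketch: flagging anomalous security-log entries.
# Requires scikit-learn; features and data are hypothetical examples.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-login features: [failed_attempts, bytes_downloaded_mb, off_hours_flag]
normal_activity = np.array([
    [0, 12, 0],
    [1, 8, 0],
    [0, 15, 0],
    [2, 20, 0],
    [0, 10, 1],
])

# Train on historical activity assumed to be mostly benign.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_activity)

# Score new events: -1 marks an outlier worth a human review, 1 looks normal.
new_events = np.array([
    [1, 14, 0],    # routine-looking login
    [25, 900, 1],  # many failures, large download, off hours
])
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY - review" if label == -1 else "normal"
    print(event, status)
```

The point is not the specific model but the workflow: the machine sifts the volume, and a human reviews whatever it surfaces.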

SAP is no stranger to harnessing the power of AI and has been leveraging it for intelligent automation, advanced analytics, supply chain automation, and application security. AI runs deep within SAP’s finance, supply chain, procurement, human resources, and sales business applications. According to the SAP AI Ethics Handbook, the company uses learning-based AI and says, “…systems are differentiating themselves by the fact that humans define the problem and the goal, but the behavior, rules, and relationships required for the system are learned in an automatized way. With the help of data, they train how to solve a problem and continuously adapt their function in this process.”

Trusting AI for SAP

As AI technology advances, it presents new challenges and risks in cybersecurity. As with any technology, AI has the potential to be abused; for example, AI can automate phishing attacks, making them more convincing and difficult to detect, and generate malware capable of evading traditional cybersecurity defenses.

One of the significant risks of AI in the cybersecurity threat landscape is the uncertainty of its outcomes. AI systems make decisions based on the data and algorithms they are trained on, and humans do not always verify the results. This can leave humans blindly trusting AI without fully understanding how or why it made a certain decision. That level of trust fundamentally conflicts with the widely adopted zero-trust strategies in cybersecurity, which emphasize verifying and authenticating every (access) request, regardless of its source. As AI matures, verifying its results will only become more challenging.
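The sketch below illustrates that tension in a few lines of Python: an AI risk score informs the decision, but every request is still verified against explicit, auditable policy checks rather than the model’s verdict being trusted blindly. The function names and rules are hypothetical, not an SAP or vendor API.

```python
# Illustrative sketch: never act on an AI verdict alone; verify the request
# against explicit, auditable policy checks first. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    authenticated: bool   # identity verified via MFA / SSO
    resource: str
    ai_risk_score: float  # 0.0 (benign) .. 1.0 (malicious), from some model

def policy_allows(req: AccessRequest) -> bool:
    """Deterministic, explainable checks applied to every request."""
    if not req.authenticated:
        return False                      # verify identity, regardless of source
    if req.resource.startswith("hr/"):    # sensitive data gets a stricter threshold
        return req.ai_risk_score < 0.2
    return req.ai_risk_score < 0.7

def decide(req: AccessRequest) -> str:
    # The AI score informs the decision, but the explicit policy has the last word.
    return "allow" if policy_allows(req) else "deny-and-log"

print(decide(AccessRequest("jdoe", True, "hr/payroll", ai_risk_score=0.4)))  # deny-and-log
```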

SAP takes this question of trust one step further by defining high-risk cases that AI must not touch. The company’s guidelines include processing personal data (relating to a natural person) and sensitive data (relating to sexual orientation, religion, or biometric data such as face imaging or voice recognition).

Protecting Against the Risks of AI in Cybersecurity

Organizations must exercise caution and maintain high skepticism when interacting with AI systems. The challenge of detecting malicious AI systems will only grow as AI advances and becomes increasingly connected and intelligent. Enterprises are already witnessing nefarious AI activity in the form of phishing attempts. Hacker-fueled, AI-generated emails can convincingly mimic internal communications, adopting the same tone of voice as the company’s HR department. When a recipient clicks, these emails can plant malicious code whose impact may take months to discover.

Using AI in cybersecurity also comes with challenges of its own. One of the most significant is that AI systems lack transparency and explainability. They analyze vast amounts of data and make decisions based on patterns that can be difficult to understand.


This lack of transparency is a problem for organizations subject to regulations and standards that require them to explain their security decisions. For example, under the European Union’s General Data Protection Regulation (GDPR), organizations must be able to explain the logic behind automated decisions affecting individuals.
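One pragmatic way to prepare for that requirement, sketched below in hypothetical Python, is to record the inputs and rules behind every automated decision so the logic can be explained after the fact; the field names and thresholds are illustrative only, not a GDPR-certified schema.

```python
# Illustrative sketch: store an explanation record with every automated
# security decision so it can later be justified to auditors or data subjects.
import json
from datetime import datetime, timezone

def decide_and_explain(event: dict) -> dict:
    reasons = []
    if event["failed_logins"] > 10:
        reasons.append("failed_logins > 10")
    if event["download_mb"] > 500:
        reasons.append("download_mb > 500")
    decision = "block_account" if reasons else "no_action"
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject": event["user"],
        "decision": decision,
        "inputs": event,
        "reasons": reasons,  # human-readable logic behind the decision
    }

record = decide_and_explain({"user": "jdoe", "failed_logins": 14, "download_mb": 20})
print(json.dumps(record, indent=2))  # auditable trail for explaining the decision
```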

It’s comforting to know that SAP has specifically called out transparency within its Guiding Principles for Artificial Intelligence. The company states, “Our systems are held to specific standards per their level of technical ability and intended usage. Their input, capabilities, intended purpose, and limitations will be communicated clearly to our customers, and we provide means for oversight and control by customers and users. They are, and will always remain, in control of the deployment of our products.” These forward-thinking principles should reassure customers who have concerns about embracing AI-infused technologies.

Another approach to help organizations alleviate their AI fears is to adopt a framework for ethical practices. The European Commission’s Ethics Guidelines for Trustworthy AI emphasize the importance of transparency, accountability, and human oversight in developing and deploying AI systems.

Conclusion

Integrating AI into SAP and applying it to cybersecurity brings both benefits and a caution flag. AI simplifies many aspects of analysis and processes, and SAP recognizes this value by implementing it across various business applications.

However, full reliance on AI without checks, balances, and transparency introduces risks and challenges. Simply put: blind trust in AI decision-making contradicts the principle of verifying and authenticating data. Organizations must balance AI’s pattern recognition with a human check. “If business people aren’t educated and confident, they get crowded out of the room because it’s hard for them to argue with the technical folks who build the algorithms,” says Susan Athey, Economics of Technology Professor at Stanford Graduate School of Business.

Following SAP’s lead, all companies should adopt ethical AI practices and frameworks to ensure trustworthy AI. Combining human scrutiny with AI applications will allow companies to manage cybersecurity risks and deliver trustworthy data outcomes.


Christoph Nagy has 20 years of working experience within the SAP industry. He has utilized this knowledge as a founding member and CEO of SecurityBridge, a global SAP security provider serving many of the world's leading brands and now operating in the U.S. Through his efforts, the SecurityBridge Platform for SAP has become renowned as a strategic security solution for automated analysis of SAP security settings and real-time detection of cyber-attacks. Prior to SecurityBridge, Nagy applied his skills as an SAP technology consultant at Adidas and Audi.
