Since its introduction, cloud computing has been touted as a “more secure” option than local servers. As the number of data breaches in the cloud grows, however, that may no longer be the case. As in almost all security failures, human error is at the core.

The 2020 data leaks at game developer Capcom and IT security vendor FireEye could have been avoided if system administrators had followed fundamental practices such as patch management, monitoring and access control. Those practices, combined with statutory data protection requirements and any of a large number of individual measures, can protect against data loss, service failure and unauthorized access in cloud services.

How does cloud security work?

Providers are generally prepared to defend their business and include user authentication, encryption and monitoring systems. What is less often planned are processes for data recovery when all or part of the cloud cannot be reached. The Colonial Pipeline breach made that obvious.

While Colonial had uninfected backups at the ready, it still paid the ransom because those backups were missing several key pieces of data needed to continue the business. The breach stopped operations for only a few days, but the company admitted it would take weeks before the ransomware-encrypted data could be restored.

Moreover, while Colonial’s networks operate in a private cloud system (ordinarily highly secure), according to the most recent reports the network was accessed through a single computer that had been compromised by the ransomware gang. Once again, a user didn’t follow procedure, and that one weak link led to the breach.

When users run in a public cloud (such as AWS) or in a hybrid of multiple cloud providers and private servers, the service agreement between provider and user defines services and measures for a wide variety of eventualities. That arrangement makes users jointly responsible for the security of the cloud.

Measures to increase your cloud security

There are very basic practices for both users and providers to lessen the impact of human error. Let’s start with patch management.

Cyber criminals gain access to systems or databases through security vulnerabilities.

If data such as payment details or passwords is manipulated or stolen this way, the consequences can be serious. Hackers use specialized search engines such as Shodan, which continuously scans the entire public Internet and records vulnerabilities. If a user hasn’t updated their service, criminals will find out easily. Procrastination is the enemy here: close vulnerabilities as quickly as possible.
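Keeping track of which services lag behind their latest patched release is the core of patch management. A minimal sketch of such a check follows; the service names, version numbers and simple dotted-version scheme are illustrative, not tied to any real inventory tool.

```python
# Sketch: flag services whose running version lags the latest patched
# release. Service names and version strings are illustrative.

def parse_version(v: str) -> tuple[int, ...]:
    """Turn a dotted version like '2.4.51' into (2, 4, 51) for comparison."""
    return tuple(int(part) for part in v.split("."))

def outdated_services(running: dict[str, str], latest: dict[str, str]) -> list[str]:
    """Return the services running a version older than the latest patch."""
    return [
        name
        for name, version in running.items()
        if parse_version(version) < parse_version(latest.get(name, version))
    ]

running = {"httpd": "2.4.51", "openssh": "8.9"}
latest = {"httpd": "2.4.58", "openssh": "8.9"}
print(outdated_services(running, latest))  # → ['httpd']
```

In practice the “latest” side of the comparison would come from vendor advisories or a vulnerability feed rather than a hard-coded dictionary.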

Multi-factor authentication (MFA) adds a further layer of protection on top of the username-password pair and makes it harder for attackers to break in. MFA should be used when accessing management consoles, dashboards and privileged accounts.

Cloud monitoring uses manual and automated tools to monitor, analyze and report on the availability and performance of websites, servers, applications and other cloud infrastructure. Administrators can test an application for speed, functionality and reliability and make sure it runs optimally while viewing customer flow, cloud metrics and log data.

A common mistake in access control is to allow Secure Shell (SSH) connections (port 22) directly from the Internet. Anyone who discovers the server’s address can bypass the firewall and access the data directly. Many administrators accidentally grant global access by attaching the 0.0.0.0/0 source range to rules on public subnets; this leaves the connection open and allows every machine on the Internet to connect. Only load balancers and jump hosts should be publicly reachable from the Internet.
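Audits for exactly this mistake can be automated. The sketch below flags ingress rules that expose port 22 to the whole Internet; the rule format is a simplified stand-in for a real cloud provider’s security-group API.

```python
# Sketch: scan firewall / security-group rules for SSH (port 22) open
# to every source address (0.0.0.0/0). Rule format is illustrative.
import ipaddress

def risky_ssh_rules(rules: list[dict]) -> list[dict]:
    """Flag ingress rules that expose port 22 to the entire IPv4 space."""
    flagged = []
    for rule in rules:
        source = ipaddress.ip_network(rule["source"])
        # 0.0.0.0/0 covers all 2**32 IPv4 addresses
        if rule["port"] == 22 and source.num_addresses == 2 ** 32:
            flagged.append(rule)
    return flagged

rules = [
    {"port": 22, "source": "0.0.0.0/0"},    # SSH open to the world: bad
    {"port": 22, "source": "10.0.0.0/8"},   # SSH from internal range: fine
    {"port": 443, "source": "0.0.0.0/0"},   # public HTTPS: expected
]
print(risky_ssh_rules(rules))  # → [{'port': 22, 'source': '0.0.0.0/0'}]
```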

Identity and Access Management (IAM) monitors the current status of each account and, together with password management, multi-factor authentication and other methods, must be in place to catch irregularities or account misuse at an early stage. Regular rotation of the API access keys in use is also part of the safeguard, as is a working management process for adding and removing user accounts.
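Key rotation is easy to enforce once key ages are tracked. A minimal sketch; the key IDs, creation dates and the 90-day policy are illustrative, not any particular provider’s defaults.

```python
# Sketch: flag API access keys older than a rotation deadline, as part
# of a regular key-rotation process. Records and policy are illustrative.
from datetime import date, timedelta

def keys_due_for_rotation(keys: dict[str, date], today: date,
                          max_age_days: int = 90) -> list[str]:
    """Return the IDs of keys created more than max_age_days ago."""
    cutoff = today - timedelta(days=max_age_days)
    return [key_id for key_id, created in keys.items() if created < cutoff]

keys = {"key-old": date(2021, 1, 5), "key-new": date(2021, 5, 20)}
print(keys_due_for_rotation(keys, today=date(2021, 6, 1)))  # → ['key-old']
```

The same check belongs in the account-management process mentioned above: accounts and keys that nobody rotates or removes are exactly the weak links attackers look for.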

Many of the vulnerabilities of cloud systems can be traced back to incorrect configuration, which can put your data at risk. This can be prevented by observing and complying with a few rules. Storage buckets for your cloud system should always be blocked from public access, so that cloud content cannot be viewed via the bucket URL. Not to be forgotten are the standard settings and default passwords of the systems and tools, which must be changed to make unauthorized access to your system difficult.
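Both checks lend themselves to a simple configuration audit. The sketch below flags public buckets and unchanged default passwords; the field names are illustrative, not a specific provider’s schema.

```python
# Sketch: audit a bucket configuration for the misconfigurations noted
# above: public access left open, or a default password still in place.
# Field names and the password list are illustrative.

DEFAULT_PASSWORDS = {"admin", "password", "changeme"}

def config_findings(bucket: dict) -> list[str]:
    """Return human-readable findings for one bucket configuration."""
    findings = []
    if bucket.get("public_access"):
        findings.append("bucket content reachable via its public URL")
    if bucket.get("admin_password") in DEFAULT_PASSWORDS:
        findings.append("default password still in place")
    return findings

bucket = {"name": "backups", "public_access": True, "admin_password": "changeme"}
for finding in config_findings(bucket):
    print(finding)
```

Running such an audit on every deployment, rather than once, is what turns “complying with the rules” from intention into practice.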

Security is an individual responsibility

Commercial and private users who store their data in a cloud service should, after choosing a suitable environment and services, think about:

  • Physical access
  • Data encryption
  • Hardware and software maintenance
  • Legal regulations, by country, on the location of data storage

Before purchasing services, know exactly what you want from the cloud provider, so that all of your requirements are covered by contract. The more deviations or inaccuracies in your requirements, the higher the risk of concluding a contract without essential services such as support. Ideally, a provider can even meet individual requirements; these should be recorded in the SLA (Service Level Agreement) with your cloud provider.

Michael Cichosz has been working in various IT security areas for over 10 years. Since 2008 he has been a malware researcher for the open source project ClamAV under Sourcefire, and for local companies he was responsible for security testing and consulting activities. In addition to publishing various articles and an IT security book, Mr. Cichosz works at his current employer WOBCOM GmbH, a small ISP in Wolfsburg, Germany, as an on-site technician for IoT/LoRaWAN devices and a specialist for data center operations and various IT security-related areas.
