Securely Webifying Applications

By Paladion

October 25, 2006

We see a recurring pattern of security errors when organizations migrate their legacy applications to the web. This Executive Briefing documents the most common security mistakes we have seen in the last 5 years.

At a glance: The Most Common Mistakes

  1. Different apps, different security policies
  2. No security design
  3. Over-reliance on infrastructure security
  4. Zero security training for developers
  5. Insecure Single Sign On implementations
  6. Test just before the launch
  7. Undocumented application backdoors
  8. Weak logging/audit trails
  9. From hot fixes to "flaming" fixes

1. Different apps, different security policies

Some organizations forego a uniform security policy for web applications. This leads to inconsistent, and often weak, strategies for defending their applications. A well-thought-out and clearly documented security policy gives direction. And after migrating the first few applications, the organization learns how to implement the policy efficiently. Our experience shows that a useful, organization-wide web application security policy covers at least these areas:

  1. Security design and architecture
  2. Security training for developers
  3. Passwords: complexity, storage, recovery, reset
  4. Encryption: algorithms; storage, transmission, and management of secret keys
  5. Caching
  6. Auditing and logging
  7. User management
  8. Security testing
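To make the password item concrete, here is a minimal sketch of one policy decision: storing passwords as salted, iterated hashes rather than in the clear. The function names and iteration count are illustrative, not drawn from any particular standard:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor; set by policy


def hash_password(password: str) -> tuple:
    """Return (salt, digest); store these, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest


def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(candidate, digest)
```

Writing choices like these into the policy once spares every project team from re-deriving them.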

2. No security design

Even organizations that adopt a uniform security policy frequently ignore the need for a documented security design when migrating their applications to the web. A coherent design helps developers avoid common mistakes. For instance, an application designed with a secure database access strategy (e.g., only parameterized queries are allowed) is less likely to be vulnerable to SQL Injection than one where developers each apply their own techniques to implement the security policy. Here are the areas that every security design should describe explicitly:

  1. Input validation
  2. Authentication
  3. Authorizations
  4. Business logic enforcement
  5. Database access
  6. Key management
  7. Integration with infrastructure security
  8. Page caching
  9. Exception handling
  10. Auditing and logging
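The database access item is the one that most often goes wrong. As a minimal sketch (using SQLite purely for illustration), a parameterized query binds user input as data rather than SQL text, so a classic injection string matches nothing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")


def find_role(name: str) -> list:
    # The ? placeholder binds `name` as data; it is never parsed as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()


print(find_role("alice"))        # [('admin',)]
print(find_role("' OR '1'='1"))  # [] - the injection string is just a literal
```

A design that mandates this pattern everywhere is far easier to review than one that asks each developer to escape input by hand.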

3. Over-reliance on infrastructure security

Traditional perimeter security emphasized infrastructure security – firewalls, IPS, VPNs – to safeguard legacy, two-tier applications. These primarily protect the operating system and generic servers – mail servers, web servers, etc. Perimeter security is still relevant today; however, it is only part of the picture once the application gets a web front end. When the application is migrated to the web and opened up to wider access, it is exposed to a wider range of threats from a larger population of adversaries. Infrastructure security is blind to many of the attacks they use. For instance, attacks like Variable Manipulation, Cross Site Scripting, and SQL Injection pass unhindered through firewalls, IPS, and VPNs. A good security design relies on both application layer security – good input validation, strong authentication, etc. – and good infrastructure security.
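To see why such attacks sail past the perimeter: a Variable Manipulation attempt is a perfectly well-formed HTTP request with one tampered field, which a firewall has no reason to block. The defense lives in the application – for example, never trusting a client-supplied price. The catalogue and function names below are hypothetical:

```python
CATALOGUE = {"book-101": 499, "pen-07": 25}  # server-side prices, in cents


def price_to_charge(item_id: str, client_price: int) -> int:
    """Re-derive the price server-side; ignore what the client submitted."""
    real_price = CATALOGUE.get(item_id)
    if real_price is None:
        raise ValueError("unknown item")
    if client_price != real_price:
        # A tampered hidden field: worth an audit-trail entry, not a discount.
        print(f"tamper attempt on {item_id}: client sent {client_price}")
    return real_price
```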

4. Zero security training for developers

Surprisingly, organizations expect their developers to write secure web applications without the least bit of training. Legacy applications worked in a less hostile environment, and developers needed to worry less about security then. When their code contained holes – and it did then too – the application was not punished. But in today's more hostile web environment, developers must be trained to build secure applications. In our penetration tests and code reviews, apps developed by trained developers are significantly more secure than those written by their untrained brethren. Even a one-day training that covers these basics helps the cause of security well:

  1. HTTP essentials
  2. How to validate inputs correctly
  3. How to choose the right encryption algorithms
  4. How to retrieve and persist data safely
  5. How to authenticate a user and authorize an action
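The first idea worth teaching under input validation is the allowlist: define exactly what a field may contain and reject everything else, rather than trying to enumerate every bad input. A minimal sketch, with a hypothetical field format:

```python
import re

# An account id is defined as 6-12 digits; anything else is rejected.
ACCOUNT_ID = re.compile(r"[0-9]{6,12}")


def validate_account_id(value: str) -> str:
    """Accept only the defined format; raise on everything else."""
    if not ACCOUNT_ID.fullmatch(value):
        raise ValueError("invalid account id")
    return value
```

A blocklist of "dangerous" characters, by contrast, is always one attacker trick behind.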

5. Insecure Single Sign On implementations

As more and more legacy applications get a web front end, organizations solve the multiple-login problem with Single Sign On (SSO). Almost every SSO rollout we have witnessed has had initial implementation flaws that compromised the application. Fortunately, they were caught during penetration testing and fixed. But the recurring pattern is very worrying. Implementers tell us that SSO rollouts are often so complex that these security errors are overlooked. When they are finally discovered, it is embarrassment on a large scale: a security solution that itself opens up a gaping hole. Our recommendation is to test every SSO implementation thoroughly – both at the initial rollout and when each new application is added.
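One recurring class of SSO flaw is a session token that is not integrity-protected or never expires, letting a user forge a session for someone else. A minimal sketch of a signed, expiring token using an HMAC – the secret and field layout are illustrative, and real SSO products define their own formats:

```python
import hashlib
import hmac
import time

SECRET = b"shared-sso-secret"  # illustrative; manage real keys per policy


def issue_token(user: str, ttl_seconds: int = 300) -> str:
    expiry = str(int(time.time()) + ttl_seconds)
    payload = f"{user}|{expiry}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"


def verify_token(token: str):
    """Return the user name if the token is authentic and unexpired."""
    try:
        user, expiry, sig = token.rsplit("|", 2)
    except ValueError:
        return None
    expected = hmac.new(SECRET, f"{user}|{expiry}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected) or int(expiry) < time.time():
        return None
    return user
```

A penetration test should try exactly what this sketch defends against: altering the user field, extending the expiry, and replaying stale tokens.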

6. Test just before the launch

Frequently, an application is security tested only just before it is launched. The reasoning is understandable: "We'll test the application once it's fully ready, so there won't be any new features after the app is tested." And since applications get ready at the last minute, security testing is done when the countdown to release begins. Bad practice. Every application owner gets a shock when a last-minute security test reveals critical holes. Remember that most applications reveal critical security holes when they are first tested, for many of the reasons already noted. And if this first test is performed very close to launch, there is hardly enough time to fix and re-test. Either the launch is delayed, or the application is launched with the hole. Adversaries also know that the first few weeks after an application is released are the best time to probe for holes. The solution: test early, and then re-test before launch. That safeguards you from rude shocks just before launch.

7. Undocumented application backdoors

Undocumented functions are not new to software. At best they were amusing (remember Easter eggs?); at worst they were a nuisance, exploited to get more out of the software when a hacker hunting for undocumented features discovered them. On the web, however, undocumented functions are a potential backdoor into the application. Our code reviews regularly reveal backdoors in custom applications – usually placed in the code inadvertently, as a shortcut, by an overzealous developer who then forgets to remove them. Dire consequences result when an adversary spots the backdoor. A strong security policy and developers trained in security are the first steps. For the more critical services you are migrating to the web, insist on a security code review.
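Part of such a code review can be automated by flagging the magic-parameter patterns these shortcuts usually take. The pattern list below is illustrative – build your own from incidents your reviews have actually turned up:

```python
import re

# Names that often mark a developer shortcut left behind in the code.
SUSPECT = re.compile(r"skip_auth|debug_login|god_mode|backdoor", re.IGNORECASE)


def flag_suspect_lines(source: str) -> list:
    """Return (line number, text) pairs worth a human reviewer's attention."""
    return [
        (n, line.strip())
        for n, line in enumerate(source.splitlines(), 1)
        if SUSPECT.search(line)
    ]


sample = 'if request.get("debug_login") == "1":\n    session.user = "admin"\n'
print(flag_suspect_lines(sample))
```

A grep of this kind only narrows the field; the human review it feeds remains the real control.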

8. Weak logging/audit trails

Traditional two-tier apps that ran within the confines of an organization expected few attacks. On the web, they are the target of both automated bots and intelligent human adversaries. In many a migration, logging and audit trails have not been upgraded to match the more hostile new environment. Identify all suspicious events at design time, and ensure they are captured in the audit trail. These events include multiple failed login attempts, failed transactions, a traffic surge from a single IP, etc. And once these events are identified, ensure they are logged with adequate detail: the IP address, timestamp, login id used, and the content of the HTTP request. After an incident, it is these details that provide a breakthrough for the forensic analysis.
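As a minimal sketch of the level of detail worth capturing, each suspicious event can be written as one structured audit line. The field layout here is illustrative:

```python
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("audit")


def failed_login_record(ip: str, login_id: str, request_line: str) -> str:
    """Build one audit line carrying who, from where, and what was sent."""
    return f"FAILED_LOGIN ip={ip} login={login_id} request={request_line!r}"


# The logging framework stamps the time; the record carries the rest.
audit.info(failed_login_record("203.0.113.9", "jdoe", "POST /login HTTP/1.1"))
```

Consistent, parseable lines like these are what let a forensic analyst correlate events across applications after an incident.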

9. From hot fixes to “flaming” fixes

A few months ago, I watched amazed as a developer troubleshot a bug and fixed the code – in an online banking application in production. In the heat of the moment, due process was set aside and this "flaming" hot fix was applied directly. In the old days, when every client had to be patched, a flaming fix was unlikely. Patches were tested – not because there was time, there never was – but because rolling back a patch was very expensive. On the web, it is easy to apply and reverse a patch, so the temptation to apply fixes directly to production is very strong. It is obviously insecure. If the flaming hot fix is itself a security patch, it has the potential to widen the hole, since the developer is making changes under pressure. Watch out for a culture of flaming fixes when you migrate your apps online.
