Recently S. Lott published a post on what a clear definition of Enterprise-level applications would be. Even though I agree with him that the "Enterprise-scale" label has been stretched by marketing to mean almost anything, I have to disagree with his conclusions:
- The fact that an enterprise running a mission-critical piece of software can actually survive bad things(tm) by other means (falling short on their obligations?) doesn't mean that the mission wasn't critical after all. Anyway, mission-critical is just a typical requirement for enterprise applications, not the definition of them.
- If the test is "if the installer is next-next-done, then it's not Enterprise", it is easily falsified by examples like the Oracle database or Oracle business applications, which definitely belong in the Enterprise set yet are rather easy to install. Obviously you can achieve complex redundancy setups with Oracle DB or MySQL, and both of them require special configuration activities not provided by the installer.
Now, I think software quality evaluation for this purpose shouldn't consider single pieces of software by themselves: CPython is just a component, not a complete solution for any Enterprise task, so any analysis of what exactly is suitable for a task should take into account at least the group of components that together form a framework for executing it. At most, I can say that some languages foster software quality more than others: for instance, Perl's CPAN repository shows how difficult it is to achieve a common coding standard with Perl.
IMHO, a definition of Enterprise-grade application should take into account the following features:
- Processes should respect the ACID principles: whether your application does financial transactions, message delivery or any other kind of workflow, users need to trust in process reliability. This also means that application and protocol modeling should give clear information on what to expect as the outcome of a given operation.
- Fault tolerance should guarantee that in a closed environment, if prerequisites are respected and maintenance is carried out regularly, the software can accomplish its tasks without interruption, despite technical faults in single components of the underlying hardware.
- Security best practices (cryptography, coding standards, risk management, extensive ACLs, etc.) should be applied to minimize unauthorized access to data, and the application should support the security features of the underlying system. While everyone claims to be secure today, I think that historical track record, open-source availability and quickness of response to threats are good ways to measure security.
- System configuration and development practices should follow standards that allow turnover of human resources, and the market should offer enough qualified personnel and paid support services to guarantee that once the system is up and running, someone exists who can actually keep it running and possibly extend it. Furthermore, the market should offer training services and books to preserve the value of your internal know-how.
- New software versions should be released when they're mature, and major releases should keep up with the evolution of the market.
- Software architecture should make performance and TCO grow at most linearly, with an upgrade path that can make your infrastructure serve huge numbers (thousands to millions of users) in a modular way (cloud computing is an example of this). Profiling and stress testing should ensure that the application has been engineered to avoid failures under peak usage due to performance hogs. Public independent benchmarks can help estimate processing rate, durability and variability in resource consumption.
- Considering that most Fortune 500 companies employ Enterprise-grade applications for production management and business management, Enterprise-grade applications should offer more complex problem-solving and environment-adaptation features than their small-business or consumer counterparts.
- Documentation and knowledge base should be adequate and up-to-date for most day-to-day needs of users and developers, and should match an exact version of the software.
- Usage data collection and analysis features are an important factor to allow management to plan for resource allocation and to take action upon anomalies.
- It should be possible to declare in advance a disaster recovery plan to get back on track within a defined timeframe, given a TEOTWAWKI event or a subset of it.
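The ACID requirement in the list above can be made concrete with a small sketch. This hypothetical example uses Python's built-in sqlite3 module (the table names and the `transfer` helper are my own, purely for illustration) to show atomicity: either both sides of a ledger transfer commit, or neither does.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move `amount` between accounts atomically; roll back on any failure."""
    try:
        # Using the connection as a context manager wraps the statements in a
        # transaction: commit on success, automatic rollback on exception.
        with conn:
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            row = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                               (src,)).fetchone()
            if row[0] < 0:
                raise ValueError("insufficient funds")  # triggers rollback
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
        return True
    except ValueError:
        return False

print(transfer(conn, "alice", "bob", 60))   # True: both updates commit together
print(transfer(conn, "alice", "bob", 60))   # False: overdraft, both updates rolled back
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)                             # {'alice': 40, 'bob': 60}
```

This is exactly the "clear information on what to expect as the outcome of a given operation" point: the caller learns unambiguously whether the whole transfer happened or nothing did, never a half-applied state.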
Okay, my definition is much longer than S. Lott's, but if it were simpler than that, it wouldn't be Enterprise.
In the current state of the software industry, products rarely back up their marketing claims of being Enterprise-ready, and application obsolescence has become really fast-paced, so CIOs' approach is more focused on agility than long-term reliability (how many SaaS products never get out of beta these days?). In reality, companies that make profits out of the Agile lifestyle are just building on top of strong Enterprise foundations and avoiding well-known Bad Agile pitfalls. According to Brad Cox's interview in the Masterminds of Programming book (see my review):
> Why is computer science not a real science?
>
> Each time you encounter a new piece of software, you encounter something completely new and unique. How can you have a science where everything is unique?
>
> If you study gold or lead from day to day, you can measure the properties and employ scientific methods to study them. With software, there is none of that.
Enterprise-grade, reusable and reliable components may be one strategy to make the software industry a real industry. That's what Brad Cox thinks, and I agree with him. What do you think? Please share your comments.