Learn from the answers to the most common questions
The performance of an HTTP proxy is limited by the performance of the underlying storage subsystem. On a Cache-MISS (the requested object is new and not yet stored in the cache), the proxy has to write the object to disk; on a Cache-HIT (the client requests an object that is already stored in the cache), it has to read the object from disk to serve it to the client. In the worst case, every HTTP request therefore results in a request to the disk subsystem. Caching parts of the working set in RAM reduces the load on the disks somewhat, but it does not change the overall performance dramatically.
A single enterprise 7.2k rpm SATA disk can handle only about 100 requests per second, and a faster 15k rpm SAS disk about 200 requests per second. That means you need roughly 30 SATA or 15 SAS disks to reach a performance of 3,000 requests per second.
We use high-end SSDs to increase the performance dramatically. SSDs have no moving parts and offer ultra-low access times; high-end SSDs can handle fifty thousand requests per second and more. The use of SSDs allows our solution to serve far more requests per second than any other caching solution on the market.
You should not trust any vendor claiming that an HTTP proxy system using only SATA or SAS disks can handle much more than 100-200 requests per disk per second. This is physically impossible: the disks have moving parts, and it takes time to move the head to the requested position and read or write the object.
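As a rough sanity check, the sizing arithmetic above can be reproduced in a few lines of Python. The IOPS figures are the approximate per-device values quoted above, not benchmark results.

```python
import math

# Approximate random-I/O capability per device, as quoted above.
IOPS_PER_DEVICE = {
    "SATA 7.2k rpm": 100,
    "SAS 15k rpm": 200,
    "high-end SSD": 50_000,
}

def devices_needed(target_rps: int) -> dict:
    """Worst case: every HTTP request causes one disk request."""
    return {name: math.ceil(target_rps / iops)
            for name, iops in IOPS_PER_DEVICE.items()}

print(devices_needed(3000))
# {'SATA 7.2k rpm': 30, 'SAS 15k rpm': 15, 'high-end SSD': 1}
```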
When planning a transparent caching project, how do you decide between a ready-made appliance solution such as CacheMARA and building your own solution from freely available open source software?
Open source platforms can be great tools for do-it-yourself (DIY) projects. But our customers have too much at stake to risk their business on building a transparent caching solution themselves from the available open source software.
Most of the time, if you are prepared to go through all that work, you are better off simply buying the ready-to-deploy solution upfront.
For example, you can take a Squid-based cache and spend a great deal of money improving it, but it will never match a leading caching solution such as CacheMARA, and the project will still have to be classified as a risk.
WCCP (Web Cache Communication Protocol) is a content-routing protocol developed by Cisco that provides a method to redirect traffic flows in real time. It features load balancing, fault tolerance, scaling and service-assurance mechanisms. Furthermore, it enables transparent caching with a WCCP-compliant router using Layer 2 rewriting or IP-GRE encapsulation.
Combining CacheMARA with a WCCP-enabled router allows easy and fully transparent caching cluster setups with up to 32 nodes.
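WCCPv2 distributes traffic by assigning a table of 256 hash buckets to the caches in a service group, so each flow is deterministically mapped to one node. The Python sketch below illustrates that bucket idea only; the node addresses and the hash function are simplified placeholders, not the actual WCCP algorithm or CacheMARA's implementation.

```python
import hashlib
import ipaddress

# Hypothetical cache-node addresses; WCCPv2 allows up to 32 nodes per service group.
CACHE_NODES = [f"10.0.0.{i}" for i in range(1, 5)]

# WCCP-style hash assignment: 256 buckets spread over the available caches.
BUCKETS = 256
bucket_to_node = [CACHE_NODES[b % len(CACHE_NODES)] for b in range(BUCKETS)]

def pick_cache(dst_ip: str) -> str:
    """Map a flow's destination IP to one of the cache nodes."""
    digest = hashlib.sha1(ipaddress.ip_address(dst_ip).packed).digest()
    return bucket_to_node[digest[0]]  # first byte selects one of 256 buckets

print(pick_cache("93.184.216.34"))
```

Because every bucket is owned by exactly one cache, the same client/server flow always reaches the same node, which keeps the cache contents of the cluster from overlapping unnecessarily.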
If no WCCP-enabled router is available, CacheMARA can instead be combined with any router that supports Policy-Based Routing (PBR). Alternatively, it can be connected to one of our other products or to most third-party load balancers on the market.
In many customer scenarios no additional investment in a dedicated load balancer was needed, as CacheMARA could simply be connected to the existing routers using PBR or WCCP. At larger scale, however, TmcMARA integrates multiple caches at once much more cleanly thanks to its caching-aware load balancing.
Besides Data Leakage Prevention, a Web Application Firewall enhances security by guarding against common web application threats.
To detect application-layer attacks (for example injection, cross-site scripting, cross-site request forgery, or session- and cookie-based flaws in general) and stop them before they ever reach the application itself, the HTTP traffic (layer 7) is interpreted and monitored. Based on included signatures or custom rulesets, requests (or at least responses) can be checked for suspicious activity or for known weaknesses that the maintainer has not yet fixed; a simplified sketch of such signature-based checking appears at the end of this section.
This global approach as a "Single Point of Detection", combined with fine-grained control, protects several systems at once without touching the existing applications and helps meet PCI DSS requirements. Cookie encryption further improves the protection against several common threats significantly.
Unlike conventional firewalls, the Web Application Firewall feature allows eMARA to detect malicious but harmless-looking HTTP traffic. SQL injection is only one example of the many threats where restricting direct database access is not enough.
Often confused with an Intrusion Detection or Prevention System, a Web Application Firewall provides security tailored to a web application's needs: in contrast to a common IDS/IPS, it offers HTTPS inspection, authentication with Single Sign-On, protection against session hijacking and similar attacks, request/response manipulation, application-level logging and reporting, and many more HTTP-layer features.
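To make the signature/ruleset idea mentioned above more concrete, here is a deliberately simplified Python sketch of signature-based request inspection. The patterns and the helper function are hypothetical illustrations only, not eMARA's actual ruleset or detection engine.

```python
import re

# Hypothetical signatures; real WAF rulesets are far larger and more precise.
SIGNATURES = {
    "sql_injection": re.compile(r"""('|")\s*or\s+1\s*=\s*1|union\s+select""", re.I),
    "xss": re.compile(r"<\s*script\b", re.I),
}

def inspect_request(path: str, query: str, body: str) -> list[str]:
    """Return the names of all signatures matched by a layer-7 request."""
    payload = " ".join((path, query, body))
    return [name for name, pattern in SIGNATURES.items() if pattern.search(payload)]

hits = inspect_request("/login", "user=admin&pw=x' OR 1=1", "")
print(hits)  # ['sql_injection']
```

A production WAF would additionally normalize encodings, track sessions, inspect responses, and decide per rule whether to block, log, or rewrite the traffic.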