I always see recommendations to avoid EFS when running web servers, when it looks like a much better solution than copying documents back and forth to S3. Our public and private web servers have always been particularly low volume, so I never really noticed unacceptable lag. If our pages load in a couple of seconds, not a problem. However, over the last couple of weeks our web server has been performing dog slow. At first I thought it was an Apache 2.4 tuning problem, and I was racking my brain trying different KeepAlive and MinSpareServers directive values, to no avail. I also suspected a plugin problem (when in WordPress, beware the plugin), but it wasn’t until I ran a du -h -d 1 to see how much space the plugins were taking up (in case one had blown up), and the command took forever to complete, that it clicked. Aha!
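For anyone who wants to run the same check, the diagnostic was just a depth-limited disk-usage summary; a minimal sketch (the WordPress plugins path is an example, substitute your own document root):

```shell
# PLUGIN_DIR would typically be your WordPress plugins directory,
# e.g. /var/www/html/wp-content/plugins; default to . so this runs anywhere.
PLUGIN_DIR="${PLUGIN_DIR:-.}"

# Summarize disk usage one level deep, human-readable sizes.
# On a local disk this returns almost instantly; on an EFS mount that has
# exhausted its burst credits, even this metadata walk can take minutes.
du -h -d 1 "$PLUGIN_DIR"
```

The slowness of this harmless command, not any error message, is what pointed away from Apache and toward the file system.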
The original EFS mount was set to ‘bursting’ throughput mode, so clearly we were exhausting our burst credits with just a couple of servers. I switched the file system to ‘provisioned’ throughput at 10 MiB/s, and that solved the latency problem. It will cost us about $60 a month for 10 MiB/s (cheap for you all, but that’s real money for us charter school people), but now when I stress test the web server with JMeter and 400 simultaneous users, the system barely notices, and scales out accordingly. Provisioned throughput can go up to 1024 MiB/s, so I imagine that can be pretty beefy for a large scaled service. Now I can go get some sleep!
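For the record, the switch from bursting to provisioned throughput is a single CLI call; a sketch assuming the AWS CLI is configured with rights on the file system (the file-system ID below is a placeholder):

```shell
# Switch an EFS file system from bursting to provisioned throughput.
# fs-12345678 is a placeholder ID; 10 MiB/s is the rate that fixed our latency.
aws efs update-file-system \
    --file-system-id fs-12345678 \
    --throughput-mode provisioned \
    --provisioned-throughput-in-mibps 10
```

Before paying for provisioned throughput, it's worth confirming the diagnosis by checking the file system's BurstCreditBalance metric in CloudWatch; a balance sitting at or near zero is the smoking gun for exactly this kind of slowdown.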