System Administrator's Guide/Scalability
Mahara has been designed to scale. It can be, and has been, deployed in a clustered environment. Furthermore, reverse proxying, HTTP caching, PHP accelerators and similar techniques can be used to improve the performance and scalability of a Mahara installation.
Note that in order to obtain the most scalable installation, you may need specialist advice and experience, which some Mahara Partners can provide.
How Should I Scale my Mahara?
If you already have an existing Mahara installation and are seeing scalability or performance issues, you may want to follow the instructions for tuning or replacing the software you are already using. Some changes may provide "easy wins" in terms of scalability and performance.
If you're setting up a new Mahara, have a look at the Hardware & Clustering section first. Getting the hardware architecture right now will save you a load of hassle in future.
Hardware & Clustering
In order to know how much hardware you will need, you must consider how many users your system will need to support - both in total, and concurrently.
- The total number of users is how many user accounts you think the system will need to support. You may not know this exactly, which is OK. Consider how many accounts you will have in a few years' time, when it will be time to replace the hardware.
- Concurrent users is defined, for our purposes, as the number of PHP scripts that need to execute per second to handle the maximum load. This roughly equates to the number of pages loaded per second, though not exactly, as thumbnails and AJAX requests are handled by PHP scripts too.
Here is an example. Hutt Valley High School has around 2,000 students in any given school year. As they are trialling Mahara, they're going to start with a small number of students, but eventually move on to having most of the students and staff use it. After three years, they expect somewhere between 4,000 and 5,000 accounts in total on the system.
The maximum number of concurrent users is a little more tricky. Perhaps the busiest times for the system will be when a teacher has taken their class to the library, and all the students are clicking around the site. You can expect up to 30 users at these times, so adding a little more as a buffer, and a little more to account for times when many thumbnails are being requested at once, let's say the maximum number of concurrent users is 40.
Once you have these numbers, you can start thinking about the hardware you'll need.
How Much Hardware do I Need?
This is a difficult question to answer, but is largely tied to the maximum number of concurrent users you will need to support (see above). You'll also need to consider how much disk space you'll need to hold the files for all the user accounts that will be on your system.
In general, the number of requests a web server can process is limited by two factors:
- CPU: Pages take CPU time to generate. How much depends on the CPU, but more cores is better, as is a faster clock speed. You may be able to coax up to 20 requests per core per second out of a modern CPU - though it's safer to bet on a smaller number.
- RAM: Most pages require somewhere between 8 and 32M of RAM to generate. Mahara raises the PHP memory limit to 128M on initialisation. Assuming you set MaxRequestsPerChild to something moderate (say 1000, which is the default), you would be safe assuming you need 32M of RAM per concurrent request.
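The sizing arithmetic above can be sketched in a few lines of shell. The concurrency and per-request figures are the example numbers from this page; the headroom figure is an assumption added here to account for the OS and the web server's own processes:

```shell
#!/bin/sh
# Back-of-the-envelope RAM sizing: peak concurrent PHP requests multiplied
# by a conservative per-request memory figure, plus headroom for the OS and
# web server overhead (the headroom value is a guess, not a measurement).
CONCURRENT=40        # peak concurrent requests (from the example above)
PER_REQUEST_MB=32    # worst-case memory per PHP request, in MB
HEADROOM_MB=512      # assumed overhead for OS, Apache parent processes, caches

TOTAL_MB=$((CONCURRENT * PER_REQUEST_MB + HEADROOM_MB))
echo "Estimated RAM needed: ${TOTAL_MB} MB"
```

For the example figures this comes to 1792 MB, which is consistent with the suggestion that 2G of RAM is more than sufficient for a site of this size.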
For the Hutt Valley High School example, one modern server could amply handle 40 requests a second, assuming a reverse proxy is used. An entry-level quad-core server with 2G of RAM would be more than sufficient. If a reverse proxy could not be used, more RAM would be required.
What About Disk Space?
Calculating the amount of disk space required is somewhat of a black art. The main guiding principles are these: you'll always need less than you think, and you'll always need more than you think. Confusing? Read on.
Firstly, you'll always need less than you think, because not all of your students will use their quotas. Even if Hutt Valley High School gave all of their 5,000 expected users 1 gigabyte of storage, you are not going to need 5 terabytes of space. In fact, you can quite often massively oversell the amount of space required, especially initially. If uptake is expected to be slow, you could get away with as little as 100G of storage to begin with.
Note that we're only discussing user quota here. This is because the amount of data that users store will dwarf the size of the database. You can almost discount the database's size in your calculations.
Secondly, you'll always need more than you think, because having read the first point you may have gone out and arranged for 100G of storage, but then as your Mahara installation takes off and users start uploading heaps of content (especially those art students), you'll find the space disappears much quicker than you anticipated.
The key to solving this problem? You must set up your file storage in such a way that you can easily add to it in future.
However you solve this problem (LVM, ZFS zpools, a SAN) doesn't much matter. But if you don't know how you're going to increase the amount of disk space available to Mahara's dataroot, then when it runs out you'll be in all sorts of trouble. So make sure you know how you'll do it, then go out and be stingy about the amount of disk you buy. At least initially :)
The other problem you should solve is the one of working out when you'll need to add more storage. If you can, get the amount of used space monitored, and check it to make sure it's growing at a manageable rate. In particular, if you can graph the dataroot usage over time, you'll be able to plan how much disk you'll need and get it installed long before it becomes an issue.
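One way to set up this monitoring is a small cron-driven script that appends a timestamped usage sample to a CSV file, which can then be graphed with whatever tool you like. This is a minimal sketch; the paths in the cron example are placeholders for wherever your dataroot and logs actually live:

```shell
#!/bin/sh
# log_dataroot_usage DIR LOGFILE
# Append one "timestamp,used-kilobytes" sample for DIR to LOGFILE per run,
# producing a CSV that is easy to graph over time.
log_dataroot_usage() {
    dir="$1"
    log="$2"
    used_kb=$(du -sk "$dir" | cut -f1)
    printf '%s,%s\n' "$(date +%Y-%m-%dT%H:%M:%S)" "$used_kb" >> "$log"
}

# Run daily from cron, e.g. (paths are placeholders):
#   0 2 * * * /usr/local/bin/log-dataroot-usage /path/to/dataroot /var/log/dataroot-usage.csv
```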
If your system requires a large number of concurrent users, you'll need to consider deploying Mahara in a clustered fashion.
Mahara can be installed so that the web servers, file server (if necessary) and database server are on separate machines. If you can, moving the database server onto its own machine is a great first step towards a scalable system.
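In practice, pointing Mahara at a database on another machine is just a matter of the connection settings in config.php. The option names below are Mahara's standard ones, but the host, credentials and database name are placeholder values:

```php
<?php
// config.php fragment -- placeholder values, adjust for your site.
$cfg->dbtype = 'postgres8';       // or 'mysql5'
$cfg->dbhost = 'db.example.org';  // the dedicated database server
$cfg->dbport = null;              // null means the default port
$cfg->dbname = 'mahara';
$cfg->dbuser = 'maharauser';
$cfg->dbpass = 'secret';          // placeholder -- use a real password
```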
Mahara does not yet have separate read and write handles for database activity, which means you won't be able to set up a multi-master database configuration. However, software like pgpool can allow you to have a master/slave arrangement if you need this level of scalability.
Once the database server is on its own machine, the next most useful way to scale is by adding more web servers. As long as they're all serving the same Mahara code, pointing at the same database and using the same dataroot directory (this can be arranged by using NFS), you can put a load balancer in front of them and Mahara will run fine.
Normally, under such an arrangement, you would have the dataroot directory stored either on one of the web servers or on the database server. The other web servers would mount it via NFS, so it appears to them as a local directory. However, if this causes too many disk-related performance issues, you may want to consider getting a file server, or using a SAN. This is also relevant if you want users to have a lot of storage quota: having a separate machine or SAN means you can deal with the amount of disk space required independently of the other servers.
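An NFS-mounted dataroot on each additional web server might look like the following /etc/fstab entry. The hostname and paths are placeholders, and the mount options shown are a common starting point rather than a tuned recommendation:

```
# /etc/fstab on each web server (hostname and paths are examples)
fileserver:/export/mahara-dataroot  /var/lib/mahara/dataroot  nfs  rw,hard,noatime  0  0
```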
- Using and tuning a reverse proxy
- Tuning the web server
- Apache settings
- PHP settings/accelerators
While your Mahara system may run flawlessly with very few resources, certain tasks will probably require some adjustments, especially when you are hosting in a virtual environment on a shoestring budget. Your hosting company may have changed PHP settings to accommodate their own needs, affecting the performance or even the proper functioning of your system. For example, exporting a larger portfolio may not work. Some users may want to make a habit of periodically exporting their portfolio as a personal backup, so be ready to have that working. The following tweaks in php.ini were necessary to export a portfolio of more than 200 pages:
- max_execution_time = 180
- max_input_time = 180
- memory_limit = 512M
Symptoms of these limits being too low were a memory allocation error and a hanging export progress bar.
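A quick way to confirm which values a php.ini file actually sets is to grep out the relevant directives. This is a small sketch; the path in the usage comment is an assumption that varies by distribution, and remember that mod_php and CLI PHP often read different ini files:

```shell
#!/bin/sh
# check_php_limits INI_FILE
# Print the last uncommented occurrence of each directive we care about,
# i.e. the value that actually wins within the file.
check_php_limits() {
    ini="$1"
    for setting in max_execution_time max_input_time memory_limit; do
        grep -E "^${setting}[[:space:]]*=" "$ini" | tail -n 1
    done
}

# Example (path is an assumption; e.g. Debian/Ubuntu with Apache):
#   check_php_limits /etc/php5/apache2/php.ini
```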
- Tuning the database