These instructions on how to mount an Amazon EBS volume apply to CentOS Linux specifically, but with little modification they can be applied to most Linux distributions. By attaching EBS volumes (i.e. disks) to your instance you can get around the majority of file space issues you might encounter when hosting popular websites or those which contain a lot of data (e.g. image galleries, music sites, podcasts, etc.)
- Note down the instance ID of the instance you want to add more storage to.
- In your AWS account, go to the Volumes link (under Elastic Block Store), click the Create Volume button and choose how big you want this device to be.
- Attach the new EBS volume to your instance by right clicking it and choosing Attach Volume. Select the instance ID you noted down in step 1 from the list, and give the new device a name to reference it on the instance, e.g. /dev/sdf
- Log in to your instance on the command line and run the following (# represents the command prompt):
# ls /dev
You should see that /dev/sdf has been created for you
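If you want to double check the device is the one you just attached and doesn't already hold a filesystem, you can inspect it before formatting; a quick sanity check, assuming the standard file utility is installed:
# file -s /dev/sdf
A brand new, unformatted volume should just report "data"; if you see filesystem details instead, stop and make sure you have the right device.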
- Format /dev/sdf by running:
# mkfs.ext3 /dev/sdf
It will warn you that this is an entire device. You should type y to allow the process to continue, unless you want to create specific partitions on this device first.
- Create a directory on the filesystem to mount your new drive to; for example, we'll use /files:
# mkdir /files
- Add a reference in the fstab file to mount the newly formatted drive onto the /files directory by running the following command:
# echo "/dev/sdb /files ext3 noatime 0 0" >> /etc/fstab
- Mount the drive by running:
# mount /files
- Check your drive has mounted correctly with the expected amount of file space by running:
# df -h /files
It really is that simple: within a few CLI commands you can add anywhere from 1GB to 1TB of storage at the drop of a hat!
Questions? Leave me a comment and I'll do my best to answer them for you.
So, I needed to upgrade MySQL on our development boxes today and I was met by a little surprise from the RPM program...
Basically it won't do an upgrade as the vendor has changed from MySQL AB to Sun Microsystems, and as a result I have to do a complete uninstall and re-install manually...
Ho hum, I know it's a small issue and for the best, but it's still a pain in the ass when something as silly as a vendor name change wastes time in what would otherwise be a quick and simple upgrade.
So anyway, as I'm going through it, the following might be useful to you if you have to do the same any time soon.
First, download all the current MySQL packages you need into a working directory:
$hell> mkdir mysql-5.1.34
$hell> cd mysql-5.1.34
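How you actually pull the RPMs down will depend on your platform and mirror; as a sketch, assuming you fetch them with wget, the URLs here are illustrative rather than exact:
$hell> wget http://dev.mysql.com/get/Downloads/MySQL-5.1/MySQL-server-community-5.1.34-0.rhel4.i386.rpm
$hell> wget http://dev.mysql.com/get/Downloads/MySQL-5.1/MySQL-client-community-5.1.34-0.rhel4.i386.rpm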
Then stop all running MySQL processes:
$hell> /etc/init.d/mysql stop
Then find all the MySQL packages you need to remove by running:
$hell> rpm -qa | grep -i '^mysql-'
Then uninstall each e.g.:
$hell> rpm -e MySQL-client-community-5.1.29-0.rhel4
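If there are a fair few of them, you can pipe the query output straight back into rpm rather than removing each package by hand; a one-liner sketch (do check what the grep matches first):
$hell> rpm -qa | grep -i '^mysql-' | xargs rpm -e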
Then re-install all the new ones you just downloaded e.g.:
$hell> rpm -i MySQL-shared-community-5.1.34-0.rhel4.i386.rpm
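Assuming they're all sitting in the directory you created at the start, a glob saves typing each file name out:
$hell> rpm -ivh MySQL-*-community-5.1.34-0.rhel4.i386.rpm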
Then run the MySQL upgrade program to do the final checks and upgrade the MySQL system database if necessary:
$hell> /usr/bin/mysql_upgrade -uroot -p
And that's it, all should work nicely again.
Remember though, you shouldn't upgrade between major versions that aren't in sequence, i.e. don't upgrade MySQL straight from 4.0 to 5.1, as the additions made to the software in 4.1, 5.0, etc. can be lost by skipping the intermediate upgrades.
I came across a few other funky things in MySQL 5.1 today that I thought might be worth telling you about in case you ever come across them too.
This time I was partitioning a number of large tables, and initially started to get the same weird errors as I did before, when stupid queries were running away with themselves due to lack of temp space.
When you partition a table, MySQL seems to build a partitioned copy of it on the file system before swapping to that table for general use, which seems like a fair way to go; but if you don't have enough temp space for the new table to be built in, you get issues similar to those I discussed here, and you can get round them in the same way.
In my case, when doing this kind of maintenance I now add an 'overflow' path to the tmpdir variable, which is basically a dir on a part of the local filesystem with a large chunk of free space on it, but that isn't on the same partition as the MySQL tables themselves.
This allows these operations to spill over when they need to without causing a lot of hassle. Be warned though, it's not generally a good idea to use a fileshare on a NAS for this procedure! I don't know how well it'd work with a SAN as I don't have one to play with, but doing it on a NAS will be REALLY slow in most cases and may cause other issues as a result.
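For reference, this is roughly what that looks like in my.cnf; the overflow path here is illustrative, and MySQL works through the listed directories in round-robin fashion:
[mysqld]
tmpdir = /tmp:/data/mysql-tmp-overflow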
But anyway, that's not the main thing I wanted to talk about.
What I wanted to talk about here is all your databases suddenly appearing to vanish altogether from MySQL after you've implemented a partitioning scheme. I guess the same would apply if you suddenly added a load of new databases or database tables to your MySQL setup too.
Alongside the vanishing databases, I've also seen errors where MySQL reports that it cannot open the directory on the file system that a particular database resides in, and other similar filesystem-related error messages.
What causes this to happen? Well, it appears to be a combination of the max number of connections, the max number of files MySQL can open, and the table cache, which dictates the number of files MySQL can have open at any one time.
Basically, what seems to happen is that when the table cache gets full, MySQL essentially dies, unable to open any more database files and therefore unable to access any more information either. BUT because the main server operation is not interrupted, the MySQL process doesn't die; it just continues to run, but without access to any information, as if none of it ever existed!
This tends to happen after partitioning has been done because a partitioning scheme can result in a large number of new database data files being produced; after all, that's all partitioning is really doing, breaking one massive single MyISAM file down into more manageable chunks.
Another reason it tends to happen after partitioning is that partitioning came in with MySQL 5.1, and in MySQL 5.1 the name of the table cache server variable (at least) changed from table_cache to table_open_cache. So, if your my.cnf or my.ini still references table_cache and you're running 5.1.3 upwards, you'll not actually be setting this value any more, and as a result the server will revert to its default value, namely 64, a tiny amount.
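So if you're on 5.1.3 or later, the fix is simply to rename the setting in my.cnf or my.ini; a minimal example (the figure is a placeholder, read on for how to pick a sensible one):
[mysqld]
table_open_cache = 1024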
I have seen recommendations that you should set this value to around 2048 for a lot of systems (which seems a bit arbitrary really), but the way to determine what kind of numbers you should be using comes from analysing the opened_files server status variable, traded off against the max connections and whatever the table cache is currently set to; see the resources links below for more info on this. You can access this status variable, and a few other useful related parameters, by running:
SHOW STATUS LIKE '%open%';
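While you're at it, it's worth comparing what the cache is currently set to with how many tables have had to be opened since startup; a steadily climbing Opened_tables figure is a classic sign the cache is too small:
SHOW VARIABLES LIKE 'table_open_cache';
SHOW GLOBAL STATUS LIKE 'Opened_tables';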
Issues can also ensue when the number of files opened by the server hits the limit on the total number of files the operating system allows any one user to open; if this appears to be the case, you need to check the docs for your OS in order to up this limit.
To get an idea of the maximum number of files MySQL might look to open at any one time, you can count the number of files in the MySQL data dir. This dir might also contain other junk like bin logs, error logs, etc., but for a rough upper limit this will do it:
ls -1R | wc -l
If your data dir is polluted with logs, you can filter this kind of count through grep and add up the results:
ls -1R | grep '\.MYI$' | wc -l
ls -1R | grep '\.MYD$' | wc -l
To find out the number of files MySQL currently has open, you can also do this:
$shell> lsof | grep mysql | wc -l
You can find out the number of max open files your OS supports by running this:
$shell> cat /proc/sys/fs/file-max
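On most Linux systems it's the per-user limit, rather than this system-wide one, that bites first; you can check it with ulimit and, as a sketch, raise it for the mysql user in /etc/security/limits.conf (the figure is just an example):
$shell> ulimit -n
$shell> echo "mysql soft nofile 8192" >> /etc/security/limits.conf
$shell> echo "mysql hard nofile 8192" >> /etc/security/limits.conf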
Some resources on these issues:
When you have a number of different servers to administer (yes administer - administrate is not a real word!), all across different platforms, switching between different client programs can get very tiresome very quickly.
As a result, there are a few programs out there that act as all-in-one clients for Windows Remote Desktop connections, VNC connections, SSH, Citrix, etc. These can be REALLY useful in this situation and can save a lot of time and hassle, while in some cases reducing the chance of user error when switching between apps.
We worked with a commercial tool, iShadow, for about a year for this, and soon realised the utility of this kind of program; but although it was a commercial product, it was clunky and very, very temperamental when it came to storing/losing passwords and connection profiles. So we set out to find an alternative.
Thankfully, Kelvin, our Technical Manager, found MRemote, a free, stable and nice-to-use client which does the job very well. Yes, it is basically an interface on top of a lot of existing open source client programs, which it loads as components, but why re-invent the wheel when these things work in their own right, and work well?
So, without raving about it much more: if you manage a load of servers and want to simplify the process somewhat, why not give MRemote a go? The only thing I think it's missing, from my point of view, is an interface to the NX Client, which I use on some of my machines, and maybe database servers such as MySQL, but aside from that it's fantastic!