Backing Up PostgreSQL Tables One at a Time

How to back up a single PostgreSQL table with pg_dump

Here are the commands I found to work best in PostgreSQL 8.4:

Commands to copy tables to /home/raid temp:
# change to the /tmp directory:
cd /tmp
# switch to the postgres user:
su postgres
# enter the password when prompted
# run the following pg_dump command, replacing did_serial with the actual table name:
pg_dump -a -t did_serial -f did_serial.sql maximize_master
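For completeness, a data-only dump like the one above can be loaded back in with psql. This is a hypothetical sketch using the same file and database names as the dump command above; it is shown as a dry run with a leading echo so you can inspect the command first, and you would remove the echo to actually execute it:

```shell
# Restore the data-only dump made above (hypothetical sketch).
# Dry run: remove the leading "echo" to actually run it as the postgres user.
echo psql -d maximize_master -f did_serial.sql
```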

Adding a Favicon Image to a Website

How to add a favicon image to the browser tab

[Image: example of a favicon in a browser tab]

Have you ever wondered how a lot of websites include a small icon image in the browser tab, just before the site title shows up?

Well, it is very simple really. Simply include the following line of code in your site's HTML, somewhere within the head tag (before the opening body tag):

<link rel="icon" type="image/png" href="images/lock.png" />

Yes, that is really all there is to it! Obviously, you need to replace the image type and href attributes to accommodate your own icon image, which should be either 32 pixels wide by 32 high, or 16 x 16.

Tips:
You can easily resize any image to be used as a favicon for your site in GIMP, which can be downloaded online.

Restoring a PostgreSQL Database with pg_restore

pg_dump restore with pg_restore

Here is how to restore a PostgreSQL .dump file made with pg_dump using the -Fc (custom format) parameter, as in the following example:

pg_dump -Fc mydb > db.dump

To restore a dump made with the above command, use the following pg_restore command at the command line:

pg_restore -C -d postgres db.dump

The -C parameter tells pg_restore to create the database, and the -d parameter names a database to connect to first. postgres can be any database name in the cluster; you have to provide one even though the actual name of the new database comes from the dump file. Don't worry, it won't touch the postgres database – pg_restore just needs an existing database to connect to before it can issue the CREATE DATABASE command.

I also wanted to try the -O option because ownership of the tables was all messed up, so I tried:

pg_restore -O -C -d postgres db.dump

The reason for the -O option was that when I transferred the database the first time, it had users and roles that didn't exist on the new server; the -O option makes the restore owner-neutral, which solved the issue.
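Before running a restore like the ones above, it can also help to peek inside the custom-format archive first. A minimal sketch, assuming the db.dump file from earlier: pg_restore's -l flag lists the archive's table of contents without touching any database. Shown as a dry run with echo – drop the echo to execute:

```shell
# List the table of contents of a custom-format dump (dry run).
echo pg_restore -l db.dump
```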

Tips and tricks:

  • Use the df -h command in a second terminal window to see whether the database is loading. On larger databases, like the one I did recently from a 15 gig backup file, the pg_restore command will seem to hang indefinitely. Run df -h in a second command-line window, then run it again a bit later and compare: if more disk space is being used, the restore is still working. That is the technique I used to verify that the pg_restore command was doing its job.
  • Errors – When I did this I got an error saying:
    pg_restore.bin: [archiver (db)] Error while PROCESSING TOC:
    pg_restore.bin: [archiver (db)] Error from TOC entry 2672; 2612 4376101 PROCEDURAL LANGUAGE plpgsql postgres
    pg_restore.bin: [archiver (db)] could not execute query: ERROR:  language "plpgsql" already exists
        Command was: CREATE PROCEDURAL LANGUAGE plpgsql;
    That is just because the database already had the plpgsql language installed, which is not a big deal and shouldn't break anything, so just ignore the error and patiently wait for pg_restore to finish.
  • Leave the command prompt alone – I seem to have ruined one restore session by hitting keys while focus was on the terminal window running pg_restore. To avoid any chance of this happening, leave that terminal window alone until it returns a new command prompt indicating that the restore has finished.
  • Missing Data – if something went wrong and some tables are missing data, you can do what I did and make a backup of each affected table individually. Use the following command to pg_dump just the table you need from the original database server, replacing table_name and db_name with the table and database names respectively:
    pg_dump -a -t table_name -f table_name.sql db_name
    Then download the file to your desktop, open it, remove everything before the COPY command, and save it. Upload it to the new server and issue the following command (you must be in the directory containing table_name.sql, or use the path before it):
    psql db_name < table_name.sql
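The manual edit in the Missing Data step above (deleting everything before the COPY command) can also be done with sed. A minimal sketch: the sample dump contents below are made up purely for illustration, and the final psql command is shown as an echo dry run – remove the echo to execute it for real:

```shell
# Fabricate a tiny data-only dump just for illustration:
cat > table_name.sql <<'EOF'
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
COPY table_name (id, name) FROM stdin;
1	alpha
2	beta
\.
EOF
# Keep only the COPY command onward (the manual edit described above):
sed -n '/^COPY /,$p' table_name.sql > table_name_data.sql
# Load it into the new server (dry run; remove the echo to execute):
echo psql db_name -f table_name_data.sql
```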

Increasing Disk Space of an Amazon EC2 Instance

Here are the steps to increasing disk space on an Amazon EC2 instance.

NOTE: This tutorial was updated in December 2016 to reflect changes since the original was written three years earlier, so the guide is up-to-date as of December 2016.

  1. Log in to your Amazon account’s management console. Then go to your instances and stop the instance you want to increase the disk space on.
  2. Get the instance ID of the server you want to increase disk space on. The instance ID will look like this: i-5cdfdd2
  3. In the description tab for the selected instance, find the Volume ID, which looks like: vol-00881476. Make a note of that too.
  4. In the same description tab, find the availability zone, which looks like eu-west-1b, and make a note of it as well.
  5. Select Volumes in the left navigation pane of the management console.
  6. Locate the Volume ID from step 3, right-click it, and select “create snapshot”. Give it a name you will remember later and make a note of it.
  7. Click on “snapshots” in the main left navigation area, right-click your new snapshot, and select “Create Volume”.
  8. Select the availability zone from step 4 and enter a new disk size. I made mine 110 GB since I had a small instance with 160 GB available. Check the AWS docs to see the max allowed for your instance type.
  9. Click on “Volumes” again in the left column, find your old volume, and detach it. When I tried this again in December 2016, I got an error saying it cannot detach the root volume. I searched Amazon’s rather scattered documentation and found this note: [NOTE: If an Amazon EBS volume is the root device of an instance, it cannot be detached while the instance is in the ‘running’ state. To detach the root volume, stop the instance first. Also, make sure to unmount any file systems on the device within your operating system before detaching the volume. Failure to unmount file systems, or otherwise properly release the device from use, can result in lost data and will corrupt the file system.] I skipped unmounting because I knew this particular filesystem was not currently being used by anyone, but if it is a live, active site, you definitely SHOULD unmount the filesystems first. To stop the instance, I clicked “instances” in the left navigation, right-clicked the running instance, and selected “state/stop”. Wait for your instance to stop completely before continuing – the icon in front of it spins while it is stopping. After it is stopped, go back to Volumes and detach the volume, successfully this time. Wait for it to detach.
  10. Find your new volume. Right-click it and select “Attach Volume”, then select the instance ID you noted in step 2. Enter sda1 after the last slash in the device field. Write down the new volume ID, such as vol-0764a479.
  11. Now restart your instance, log in to it from PuTTY or another command-line interface, and issue the command df -h; you will see your new volume information if everything worked. If the disk space is the same, continue reading; otherwise you are done.
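The console steps above can also be sketched with the AWS CLI. This is a hypothetical outline, not a tested recipe: the IDs are the example values from the steps above, snap-EXAMPLE is a placeholder for your own snapshot’s ID, and every command goes through a run helper that only echoes it, so nothing executes until you change the helper:

```shell
# Hypothetical AWS CLI outline of the console steps above.
# run() only echoes each command; change it to run() { "$@"; } to execute for real.
run() { echo "$@"; }

run aws ec2 stop-instances --instance-ids i-5cdfdd2                                          # step 1
run aws ec2 create-snapshot --volume-id vol-00881476 --description "pre-resize backup"       # step 6
run aws ec2 create-volume --snapshot-id snap-EXAMPLE --availability-zone eu-west-1b --size 110  # steps 7-8
run aws ec2 detach-volume --volume-id vol-00881476                                           # step 9
run aws ec2 attach-volume --volume-id vol-0764a479 --instance-id i-5cdfdd2 --device /dev/sda1   # step 10
run aws ec2 start-instances --instance-ids i-5cdfdd2                                         # step 11
```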

When you run the df command from the command prompt and still see the same disk space, as in the following image, you need to grow the filesystem from the command prompt, as I will show you below:

[Image: df output still showing the original 8 GiB size]

As you can see in the image above, df still reported 8256952 blocks one time I tried this while increasing a micro instance’s volume size from 8 GiB to 16 GiB, so I ran the following command from the command prompt:

resize2fs /dev/xvda1

Then I typed df at the command prompt again and got the following results, which show the larger volume size of 16 GiB:

[Image: df output showing the resized 16 GiB volume]

Notice the size is now 16513960. By the way, if you run df -h instead of just df, it will say 16G instead of 16513960; the -h parameter prints sizes in human-readable format.
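The relationship between those two numbers can be checked with shell arithmetic: df reports sizes in 1K blocks, so dividing by 1024 twice converts the figure above to GiB (integer division lands a bit under 16 because of filesystem overhead, which df -h rounds up to 16G):

```shell
# df reports sizes in 1K blocks; convert the figure above to whole GiB:
echo $(( 16513960 / 1024 / 1024 ))   # integer division, prints 15
```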

That’s it. That is exactly how I increased my disk space on an Amazon EC2 instance recently. This was a Linux instance; the only difference for a Windows instance would be to replace sda1 with xvdf or something similar in step ten above. Good luck!