Discussion:
VM sys user time ballooning
Patrick Connolly
2010-11-18 08:10:18 UTC
I use a CentOS VM (VMware) as a guest on a Windows XP host machine
(strange as that might appear). A lot of the time it works quite
well, but some things are rather less than satisfactory.

Untarring a 100 MB file (not compressed) can take more than a minute,
whereas the same task takes less than a second on a 'real' machine
with the same hardware.
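
For reference, something like the following separates wall-clock time
from user and sys CPU time (archive.tar is just a stand-in name for the
real tarball):

    # flushing the page cache first (as root) makes repeat runs comparable
    sync && echo 3 > /proc/sys/vm/drop_caches

    # 'real' is wall-clock time; 'user' and 'sys' are CPU time spent in
    # userspace and in the guest kernel respectively
    time tar xf archive.tar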

I'm not sure if it's related, but the other thing is that many tasks
show a lot of 'user' time and hardly any 'sys' time, and take a very
long time to achieve very little. According to top, a particular
process might show 90% use on both CPUs while nothing is actually
being accomplished for a long time. To make an automotive analogy,
it's like having the engine revving while the clutch slips. There's
no obvious pattern to when it happens, though it tends to be worse
just after the VM has started and usually settles down afterwards.
But it can go feral at any time.
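
One way to watch what the guest is actually doing while this happens
(a sketch using the standard tools; the sample counts are arbitrary):

    # one-second samples: watch the us (user), sy (system), wa (I/O wait)
    # and id (idle) columns while the slow task runs
    vmstat 1 30

    # per-process view in batch mode, captured to a file for later reading
    top -b -d 1 -n 30 > top-sample.txt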

I suspect that most of the problem is that the VM is not set up to use
a Linux filesystem. From what I can work out, VMware defaults to
using the host's filesystem for guests. I am wondering whether it's
worth putting effort into getting the reluctant help people to modify
it so that the disk space reserved for the VM uses a proper Linux
filesystem.

My guess is that the filesystem type can't be changed without
reinstalling the guest, and if it's not likely to make as much
difference as I hope, I'm unlikely to get anyone interested in doing
such a job.

Does anyone have experience with such a system, and any suggestions?

TIA
--
___ Patrick Connolly
{~._.~}
_( Y )_ Good judgment comes from experience
(:_~*~_:) Experience comes from bad judgment
(_)-(_)


David Pando
2010-11-18 09:08:58 UTC
Post by Patrick Connolly
I use a CentOS VM (VMware) as a guest on a Windows XP host machine
(strange as that might appear). A lot of the time it works quite
well, but some things are rather less than satisfactory.
I can't say for VMware, but CentOS running as a guest in VirtualBox has a
nasty bug that makes the CPU go through the roof even when idle under
certain conditions. The problem can be worked around by passing some
parameters to the kernel or by running another VM concurrently.

http://serverfault.com/questions/35918/high-cpu-usage-when-running-a-centos-guest-in-virtualbox
http://forums.virtualbox.org/viewtopic.php?t=7022
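
For what it's worth, the fix those threads usually point at for
CentOS/RHEL 5 guests is lowering the timer tick with the divider= boot
parameter; roughly, in the guest's /boot/grub/grub.conf (the kernel
version shown is only an example):

    # append divider=10 to the kernel line to drop the timer frequency
    # from 1000Hz to 100Hz; applies to CentOS/RHEL 5 kernels
    kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/LogVol00 divider=10
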
Daniel Pittman
2010-11-18 12:13:54 UTC
I use a CentOS VM (VMware) as a guest on a Windows XP host machine (strange
as that might appear). A lot of the time it works quite well, but some
things are rather less than satisfactory.
Untarring a 100 MB file (not compressed) can take more than a minute,
whereas the same task takes less than a second on a 'real' machine with
the same hardware.
An interesting question would be: does it take that long doing the
decompression in memory only? I bet that it is much faster if it doesn't
actually write to "disk".
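
A rough way to test that, assuming the archive is plain uncompressed tar
(the file name is a placeholder):

    # unpack to stdout and discard the contents - no disk writes at all
    time tar xOf archive.tar > /dev/null

    # or unpack into tmpfs, which stays in RAM
    mkdir -p /dev/shm/untar-test
    time tar xf archive.tar -C /dev/shm/untar-test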


[...]
I suspect that most of the problem is that the VM is not set up to use a
Linux filesystem. From what I can work out, VMware defaults to using the
host's filesystem for guests.
....not exactly, no. It uses a file stored on the host, but the performance
cost of that (while not zero) is nothing like the problem you highlight there.

It also has *nothing* to do with the contained or host file system,
particularly, except in the most abstract of ways.
I am wondering whether it's worth putting effort into getting the reluctant
help people to modify it so that the disk space reserved for the VM uses a
proper Linux filesystem.
I can't say for sure, but I would bet that it isn't.

I would strongly suggest that this is the typical performance profile of a VM
that is using emulated hardware for I/O rather than paravirtualized devices.

Check that you have the VMWare Guest drivers installed and fully functional in
that guest system, especially the VMWare disk drivers. That should deliver
the performance you are looking for.

(Generally, terrible I/O performance in any virtual environment means you
don't have the PV guest drivers installed or working.)
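
A quick sanity check from inside the guest; the module names are
examples and vary somewhat between VMware Tools releases:

    # look for the VMware guest (Tools) kernel modules
    lsmod | grep -i -E 'vmxnet|vmmemctl|vmhgfs|vmw'

    # the Tools init script, if installed, should report running
    /etc/init.d/vmware-tools status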

Daniel
--
✣ Daniel Pittman ✉ ***@rimspace.net ☎ +61 401 155 707
♽ made with 100 percent post-consumer electrons

Patrick Connolly
2010-11-19 08:55:29 UTC
Somewhere about Thu, 18-Nov-2010 at 11:13PM +1100 (give or take),
Daniel Pittman wrote:

[...]

|> An interesting question would be: does it take that long doing the
|> decompression in memory only? I bet that it is much faster if it
|> doesn't actually write to "disk".

I suspect that would be true, but the files do need to end up on the
disk. What is going on when sys is not clocking up time but user is?
I'm amazed how quickly it happens on 'real' hardware.

|>
|>
|> [...]
|>
|> > I suspect that most of the problem is that the VM is not set up
|> > to use a Linux filesystem. From what I can work out, VMware
|> > defaults to using the host's filesystem for guests.
|>
|> ....not exactly, no. It uses a file stored on the host, but the
|> performance cost of that (while not zero) is nothing like the
|> problem you highlight there.

I think I understand that, but I was under the impression that it is
possible to set aside real disk space for the guest to use its own
filesystem. I'd have thought that would be more efficient than the
host having a great big file that contains the guest's data. I'm not
speaking from any real experience and might be talking complete
rubbish.

|> It also has *nothing* to do with the contained or host file system,
|> particularly, except in the most abstract of ways.

That might mean I'm talking complete rubbish.

[...]

|> (Generally, terrible I/O performance in any virtual environment
|> means you don't have the PV guest drivers installed or working.)

Thanks for that information. I'm hoping it will give the
unenthusiastic help people an idea of what they should be looking at.

best
--
___ Patrick Connolly
{~._.~}
_( Y )_ Good judgment comes from experience
(:_~*~_:) Experience comes from bad judgment
(_)-(_)


Cliff Pratt
2010-11-19 21:51:01 UTC
Post by Patrick Connolly
Somewhere about Thu, 18-Nov-2010 at 11:13PM +1100 (give or take),
[...]
|> An interesting question would be: does it take that long doing the
|> decompression in memory only? I bet that it is much faster if it
|> doesn't actually write to "disk".
I suspect that would be true, but the files do need to end up on the
disk. What is going on when sys is not clocking up time but user is?
I'm amazed how quickly it happens on 'real' hardware.
|>
|>
|> [...]
|>
|> > I suspect that most of the problem is that the VM is not set up
|> > to use a Linux filesystem. From what I can work out, VMware
|> > defaults to using the host's filesystem for guests.
|>
|> ....not exactly, no. It uses a file stored on the host, but the
|> performance cost of that (while not zero) is nothing like the
|> problem you highlight there.
I think I understand that, but I was under the impression that it is
possible to set aside real disk space for the guest to use its own
filesystem. I'd have thought that would be more efficient than the
host having a great big file that contains the guest's data. I'm not
speaking from any real experience and might be talking complete
rubbish.
A VM can use a) a file on a file system on a disk, b) a whole partition
on a disk, c) raw space on disk. All these appear as disks to the VM.

I'd assume that the disk speed increases from a) to c) but I've not done
the tests.
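
If anyone wants to put numbers on it, a crude comparison from inside the
guest could look like this (file name, size and device name are arbitrary
examples):

    # sequential write of 256 MB, with a sync so buffered data is counted
    time ( dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 && sync )
    rm /tmp/ddtest

    # buffered sequential read timing of the guest's disk
    # (the device may be /dev/hda on an emulated IDE controller)
    hdparm -t /dev/sda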

Cheers,

Cliff

t94xr
2010-11-20 11:48:46 UTC
Hey, unfortunately I had to install Ubuntu 10.10 alternate onto my
laptop drive using USB inside VMware - for some reason the laptop
doesn't want to boot Ubuntu discs anymore.

It's a Toshiba M40. When I first logged into it, GNOME ran fine, but
after I rebooted (and even reinstalled) it doesn't run GNOME anymore.
Yet when I log in using the desktop session (safe mode), it runs fine -
except without network.

Is there something I'm doing wrong here? Something I'm missing?

CameronW

Nevyn
2010-11-20 18:00:41 UTC
Hey, unfortunately I had to install Ubuntu 10.10 alternate onto my laptop
drive using USB inside VMware - for some reason the laptop doesn't want to
boot Ubuntu discs anymore.
It's a Toshiba M40. When I first logged into it, GNOME ran fine, but after
I rebooted (and even reinstalled) it doesn't run GNOME anymore.
Yet when I log in using the desktop session (safe mode), it runs fine -
except without network.
Is there something I'm doing wrong here? Something I'm missing?
CameronW
Wow... that was ambiguous. I did have a scenario the other day where
the 50th install resulted in just plain bad things happening. Freak
occurrence. It was fine after another install.

Check your CD-ROM drive...

Regards,
Nevyn
http://nevsramblings.blogspot.com/

Daniel Pittman
2010-11-21 10:04:45 UTC
Post by Patrick Connolly
Somewhere about Thu, 18-Nov-2010 at 11:13PM +1100 (give or take),
[...]
|> An interesting question would be: does it take that long doing the
|> decompression in memory only? I bet that it is much faster if it
|> doesn't actually write to "disk".
I suspect that would be true, but the files do need to end up on the
disk. What is going on when sys is not clocking up time but user is?
I'm amazed how quickly it happens on 'real' hardware.
I suggested it as a way to determine whether the problem is specifically I/O
related or caused by something else in the VM. That way you would know what
to investigate next to track it down. :)
Post by Patrick Connolly
|> [...]
|>
|> > I suspect that most of the problem is that the VM is not set up
|> > to use a Linux filesystem. From what I can work out, VMware
|> > defaults to using the host's filesystem for guests.
|>
|> ....not exactly, no. It uses a file stored on the host, but the
|> performance cost of that (while not zero) is nothing like the
|> problem you highlight there.
I think I understand that, but I was under the impression that it is
possible to set aside real disk space for the guest to use its own
filesystem.
It is.
Post by Patrick Connolly
I'd have thought that would be more efficient than the host having a great
big file that contains the guest's data. I'm not speaking from any real
experience and might be talking complete rubbish.
Well, you are not wrong: raw disk is more efficient, because of two major
factors:

First, when the VM guest uses an elevator algorithm to optimize writes for
seeking, the location of the blocks on disk is more likely[1] to have a
predictable relationship to the location of the virtual blocks on disk.

Second, there is some overhead to using a file, and especially a sparse file,
since you have to allocate extents or blocks, write them, deal with
fragmentation, etc.


However, the interesting question is really "how much" rather than "is there
any" difference - and in practice you should see performance more or less
equivalent to random I/O on large files for the virtual disk.

Which, generally, means something comparable to your "not in the VM" numbers,
less whatever it costs in the overhead of getting requests out from the guest
to the host.
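
Related to the elevator point above: one can check, and change, which
elevator the guest is using. noop is often suggested for guests, on the
theory that the host ends up reordering the I/O anyway (the device name
is an example):

    # the scheduler currently in use is shown in square brackets
    cat /sys/block/sda/queue/scheduler

    # switch to noop at runtime (as root); adding elevator=noop to the
    # kernel boot line makes it permanent
    echo noop > /sys/block/sda/queue/scheduler
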
Post by Patrick Connolly
[...]
|> (Generally, terrible I/O performance in any virtual environment
|> means you don't have the PV guest drivers installed or working.)
Thanks for that information. I'm hoping it will give the unenthusiastic
help people an idea of what they should be looking at.
*nod* That would certainly be my first port of call in debugging it. For
what it is worth, most of the time 'lspci' will tell you if you have virtual
hardware or real hardware backing the disk devices. Just look for the VM
vendor name in the PCI ID. :)
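
Concretely, something along these lines; the output will differ with the
VMware version:

    # a VMware guest typically shows 'VMware' in several PCI entries
    # (VGA adapter, SCSI controller, network card)
    lspci | grep -i vmware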

Daniel

Footnotes:
[1] ...because you might have RAID, or LVM, or something else sitting between
the actual spindles and the "raw device" that the virtualization software
is writing to.
--
✣ Daniel Pittman ✉ ***@rimspace.net ☎ +61 401 155 707
♽ made with 100 percent post-consumer electrons
