Discussion:
Memory usage - Virtual Machine - 7.1.x
Steve Malenfant
2018-09-10 13:21:44 UTC
Ever since we upgraded to 6.x or 7.x, it seems like we have issues with
memory utilization climbing over time in a 4GB VM. It takes on average
about 5 days before it starts swapping.

Current version: 7.1.4

There are only a few requests reaching those VMs, yet they are creating
quite a problem on the VM hosts due to excessive swapping (IO). Most of
those requests, if not all, are using the astats_over_http plugin for
Traffic Control.

I've condensed the memory dump to show just what's being used.
hdrStrHeap in particular seems to be using quite a bit of memory and keeps
increasing over time.

Is there anything specific I would need to look into?

-----------------------------------------------------------------------------------------
   Allocated |       In-Use |  Type Size | Free List Name
-------------|--------------|------------|----------------------------------
     2097152 |            0 |      32768 | memory/ioBufAllocator[8]
      524288 |            0 |       8192 | memory/ioBufAllocator[6]
    96993280 |     96661504 |       4096 | memory/ioBufAllocator[5]
       16384 |          128 |        128 | memory/ioBufAllocator[0]
       73728 |        45312 |         96 | memory/eventAllocator
     1407600 |      1399440 |         80 | memory/mutexAllocator
       24576 |        22016 |         64 | memory/ioBlockAllocator
     1150560 |      1145328 |         48 | memory/ioDataAllocator
       65280 |        62400 |        240 | memory/ioAllocator
       97760 |        25568 |        752 | memory/netVCAllocator
      114400 |            0 |        880 | memory/sslNetVCAllocator
       67712 |        33856 |      33856 | memory/dnsBufAllocator
      163840 |            0 |       1280 | memory/dnsEntryAllocator
        4096 |           16 |         16 | memory/expiryQueueEntry
        8192 |           64 |         64 | memory/refCountCacheHashingValueAllocator
      296960 |         2320 |       2320 | memory/hostDBContAllocator
  1162870784 |   1162797056 |       2048 | memory/hdrStrHeap
  1162870784 |   1162862592 |       2048 | memory/hdrHeap
       24576 |         6144 |        192 | memory/httpServerSessionAllocator
     1990656 |            0 |       7776 | memory/httpSMAllocator
      114688 |        30464 |        896 | memory/http1ClientSessionAllocator
       24576 |         6624 |         96 | memory/INKContAllocator
        4096 |          224 |         32 | memory/apiHookAllocator
      262144 |            0 |       1024 | memory/ArenaBlock
  2431268112 |   2425101056 |            | TOTAL
-----------------------------------------------------------------------------------------
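
(For anyone who wants to capture the same table: this is the freelist dump that
traffic_server writes to traffic.out when the memory dump interval is set.
Something like the following should do it; 60 seconds is just an example interval.)

# enable a periodic allocator/freelist dump to traffic.out
traffic_ctl config set proxy.config.dump_mem_info_frequency 60
traffic_ctl config reload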

Steve
Steve Malenfant
2018-10-17 19:00:39 UTC
I tried the stats_over_http plugin today and it's behaving the same way:
memory/hdrHeap and memory/hdrStrHeap increase on every request.

Will file an issue. This is not only a Virtual Machine issue.

Tried to get jemalloc to help us track this down, but I believe we failed; it produced no output.

How to reproduce?

#!/bin/bash
# hammer the stats endpoint; memory/hdrHeap and memory/hdrStrHeap grow on every request
while true; do
    curl -s http://<ip>/_stats >/dev/null
done
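
While that loop runs, the growth shows up in the periodic freelist dump. A quick
way to watch just the two header heaps (assuming the default log location,
/var/log/trafficserver/traffic.out):

# print the most recent hdrHeap / hdrStrHeap rows once a minute
while true; do
    grep -E 'memory/hdrStrHeap|memory/hdrHeap' /var/log/trafficserver/traffic.out | tail -2
    sleep 60
done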

Steve
Post by Steve Malenfant
Ever since we upgraded to 6.x or 7.x, it seems like we have issues with
memory utilization climbing over time in a 4GB VM. It takes on average
about 5 days before it starts swapping.
Current version: 7.1.4
There are only a few requests reaching those VMs, yet they are creating
quite a problem on the VM hosts due to excessive swapping (IO). Most of
those requests, if not all, are using the astats_over_http plugin for
Traffic Control.
I've condensed the memory dump to show just what's being used.
hdrStrHeap in particular seems to be using quite a bit of memory and keeps
increasing over time.
Is there anything specific I would need to look into?
Steve
Alan Carroll
2018-10-31 16:00:08 UTC
Looking at the memory dump, my first guess would be you have a lot of
stalled transactions that never got cleaned up. This is based on the
ioBufAllocator[5], which IIRC is the default size for the initial read. The
hdrStrHeap and hdrHeap are used for storing request / response headers in
memory. Those being so large seems to indicate that data isn't getting
cleaned up.
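
A quick way to check whether transactions are piling up is to watch the "current"
HTTP gauges while the box is mostly idle; if the transaction counts stay high or
keep climbing, things aren't being released. (Metric names below are from memory
and may differ slightly between versions.)

# list the current connection / transaction gauges
traffic_ctl metric match 'proxy.process.http.current_.*'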
Post by Steve Malenfant
I tried the stats_over_http plugin today and it's behaving the same way:
memory/hdrHeap and memory/hdrStrHeap increase on every request.
Will file an issue. This is not only a Virtual Machine issue.
Tried to get jemalloc to help us track this down, but I believe we failed; it produced no output.
How to reproduce?
#!/bin/bash
while true; do
curl http://<ip>/_stats >/dev/null
done
Steve
Post by Steve Malenfant
Ever since we upgraded to 6.x or 7.x, it seems like we have issues with
memory utilization climbing over time in a 4GB VM. It takes on average
about 5 days before it starts swapping.
Current version: 7.1.4
There are only a few requests reaching those VMs, yet they are creating
quite a problem on the VM hosts due to excessive swapping (IO). Most of
those requests, if not all, are using the astats_over_http plugin for
Traffic Control.
I've condensed the memory dump to show just what's being used.
hdrStrHeap in particular seems to be using quite a bit of memory and keeps
increasing over time.
Is there anything specific I would need to look into?
Steve
--
*Beware the fisherman who's casting out his line in to a dried up riverbed.*
*Oh don't try to tell him 'cause he won't believe. Throw some bread to the
ducks instead.*
*It's easier that way. *- Genesis : Duke : VI 25-28
Leif Hedstrom
2018-10-31 16:15:57 UTC
Looking at the memory dump, my first guess would be you have a lot of stalled transactions that never got cleaned up. This is based on the ioBufAllocator[5], which IIRC is the default size for the initial read. The hdrStrHeap and hdrHeap are used for storing request / response headers in memory. Those being so large seems to indicate that data isn't getting cleaned up.
Oh, we should have followed up on this. We worked with Steve for a bit and tracked it down to the stale-while-revalidate plugin. Since this plugin is dead now, I don’t think it’s worthwhile to try to fix it (but we’ll take patches for 7.1.x if anyone wants to work on it).

We need a better alternative for stale-while-revalidate, since the plugin has always been pretty darn crippled :).
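
In the meantime, the workaround is simply to stop loading the plugin. A rough
sketch, assuming it is loaded globally from plugin.config at the default path
(adjust if it is attached per-remap instead):

# comment out the stale_while_revalidate line, then restart traffic_server
sudo sed -i 's/^stale_while_revalidate/#&/' /etc/trafficserver/plugin.config
sudo traffic_ctl server restart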

Cheers,

— leif
