
Techniques and tools for diagnosing and fixing memory leaks

Last updated: April 3, 2014

Notes I used for my talk at the monthly Python Madrid meetup on December 11, 2013.

There are no slides as such; the talk was very hands-on.

I have an audio recording of 1 hour and 40 minutes. I am not publishing it because the quality is mediocre, but it is available on request.

Google search for the thread that started all of this.

The main conclusions:

  • In long-running processes (daemons), it is extremely useful to have a thread dedicated to serving a CLI, so that you can connect to the process from a terminal and perform interactive operations without disturbing its normal work: walk its data structures, count objects, force a garbage collection, etc.

  • Avoid reference cycles in your data whenever possible. Study the weakref module of the standard library.

  • Study the gc module of the standard library.

  • To check whether cycles are the problem, see whether objects get freed when calling gc.collect(). If necessary, fine-tune with gc.set_threshold().

  • The functions gc.get_referrers() and gc.get_referents() are priceless, especially if you can open a terminal session into the process and study it interactively while it is running (see the sketch right after this list).

  • Be careful about capturing large data structures in exceptions: they can form a cycle and take a long time to be cleaned up, or not be cleaned up until another exception is raised.

  • No tool is a silver bullet. When the situation is complex, in the end you have to do the hard work by hand.

  • Many of the best-known tools are abandoned; it is not just that they do not work on Python 3, many do not even work properly on Python 2.7.

  • Python 3.4 will include tracemalloc, which should help a lot. It is very important for the Python standard library to include this functionality, because it sets a quality baseline for external tools. The integration work also provides interfaces that other tools can build on:

    There is a patch that provides this module for earlier Python versions, although in this particular problem it did not work correctly.

  • Watch out for cycles involving destructors if you are using a Python version older than 3.4. PEP 442: Safe object finalization.
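
A minimal sketch of the kind of interactive session these bullets describe, run from the daemon's CLI/console thread (the class name "Section" is only an example of a suspect type):

import gc

# If a manual collection frees many objects, reference cycles are at
# least part of the problem (a pure refcount leak would free nothing).
print("freed by gc.collect(): %d" % gc.collect())

# Count live instances of a suspect class among the objects the
# collector knows about.
suspects = [o for o in gc.get_objects()
            if type(o).__name__ == "Section"]
print("live Section instances: %d" % len(suspects))

# For one of them, ask who is keeping it alive.
if suspects:
    for referrer in gc.get_referrers(suspects[0]):
        print(type(referrer))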


Notes

These notes are not a presentation as such but, in a way, a log of the experiments I was running.

http://mg.pov.lt/objgraph/
https://pypi.python.org/pypi/Dozer <- WSGI debugger
http://jmdana.github.io/memprof/
http://pythonhosted.org/Pympler/muppy.html#muppy
https://pypi.python.org/pypi/memory_profiler <- Line by line
https://github.com/smartfile/caulk/
https://github.com/wyplay/pytracemalloc <- Python 3.4
https://pypi.python.org/pypi/meliae
http://guppy-pe.sourceforge.net/  <- PhD thesis

The starting point: weewxd (the weewx weather-station daemon) keeps growing until the kernel's OOM killer shoots it down:

Nov 19 05:22:44 MeteoPI kernel: [230778.537233] weewxd invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
Nov 19 05:22:44 MeteoPI kernel: [230778.537309] [<c0013b60>] (unwind_backtrace+0x0/0xf0) from [<c039c4e0>] (dump_header.isra.15+0x74/0x1a0)
Nov 19 05:22:44 MeteoPI kernel: [230778.537359] [<c039c4e0>] (dump_header.isra.15+0x74/0x1a0) from [<c0092050>] (oom_kill_process+0x294/0x418)
Nov 19 05:22:44 MeteoPI kernel: [230778.537395] [<c0092050>] (oom_kill_process+0x294/0x418) from [<c00925b8>] (out_of_memory+0x1cc/0x29c)
Nov 19 05:22:44 MeteoPI kernel: [230778.537426] [<c00925b8>] (out_of_memory+0x1cc/0x29c) from [<c0095a84>] (__alloc_pages_nodemask+0x704/0x738)
Nov 19 05:22:44 MeteoPI kernel: [230778.537453] [<c0095a84>] (__alloc_pages_nodemask+0x704/0x738) from [<c0091070>] (filemap_fault+0x250/0x47c)
Nov 19 05:22:44 MeteoPI kernel: [230778.537485] [<c0091070>] (filemap_fault+0x250/0x47c) from [<c00a8f00>] (__do_fault+0x6c/0x4a0)
Nov 19 05:22:44 MeteoPI kernel: [230778.537515] [<c00a8f00>] (__do_fault+0x6c/0x4a0) from [<c00abb9c>] (handle_pte_fault+0x74/0x788)
Nov 19 05:22:44 MeteoPI kernel: [230778.537541] [<c00abb9c>] (handle_pte_fault+0x74/0x788) from [<c00ac348>] (handle_mm_fault+0x98/0xc8)
Nov 19 05:22:44 MeteoPI kernel: [230778.537581] [<c00ac348>] (handle_mm_fault+0x98/0xc8) from [<c03a3048>] (do_page_fault+0x22c/0x3cc)
Nov 19 05:22:44 MeteoPI kernel: [230778.537610] [<c03a3048>] (do_page_fault+0x22c/0x3cc) from [<c00083c4>] (do_PrefetchAbort+0x34/0x98)
Nov 19 05:22:44 MeteoPI kernel: [230778.537639] [<c00083c4>] (do_PrefetchAbort+0x34/0x98) from [<c03a1c54>] (ret_from_exception+0x0/0x10)
Nov 19 05:22:44 MeteoPI kernel: [230778.537652] Exception stack(0xcd9b9fb0 to 0xcd9b9ff8)
Nov 19 05:22:44 MeteoPI kernel: [230778.537669] 9fa0:                                     00d2a160 00814700 007fe638 000c21e8
Nov 19 05:22:44 MeteoPI kernel: [230778.537689] 9fc0: 007bcc3e 00814700 00933480 00d2a160 00814700 007bcc3e 00933484 00933338
Nov 19 05:22:44 MeteoPI kernel: [230778.537705] 9fe0: 007bcc34 b5d9c6c0 001650e0 000c21e8 20000010 ffffffff
Nov 19 05:22:44 MeteoPI kernel: [230778.537714] Mem-info:
Nov 19 05:22:44 MeteoPI kernel: [230778.537722] Normal per-cpu:
Nov 19 05:22:44 MeteoPI kernel: [230778.537732] CPU    0: hi:  186, btch:  31 usd:  58
Nov 19 05:22:44 MeteoPI kernel: [230778.537758] active_anon:59239 inactive_anon:59239 isolated_anon:0
Nov 19 05:22:44 MeteoPI kernel: [230778.537758]  active_file:99 inactive_file:142 isolated_file:0
Nov 19 05:22:44 MeteoPI kernel: [230778.537758]  unevictable:0 dirty:2 writeback:1 unstable:0
Nov 19 05:22:44 MeteoPI kernel: [230778.537758]  free:2034 slab_reclaimable:595 slab_unreclaimable:878
Nov 19 05:22:44 MeteoPI kernel: [230778.537758]  mapped:96 shmem:4 pagetables:427 bounce:0
Nov 19 05:22:44 MeteoPI kernel: [230778.537808] Normal free:8136kB min:8192kB low:10240kB high:12288kB active_anon:236956kB inactive_anon:236956kB active_file:396kB inactive_file:568kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:503936kB mlocked:0kB dirty:8kB writeback:4kB mapped:384kB shmem:16kB slab_reclaimable:2380kB slab_unreclaimable:3512kB kernel_stack:1016kB pagetables:1708kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:5640 all_unreclaimable? yes
Nov 19 05:22:44 MeteoPI kernel: [230778.537821] lowmem_reserve[]: 0 0
Nov 19 05:22:44 MeteoPI kernel: [230778.537838] Normal: 186*4kB 74*8kB 43*16kB 21*32kB 9*64kB 6*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 8136kB
Nov 19 05:22:44 MeteoPI kernel: [230778.537879] 528 total pagecache pages
Nov 19 05:22:44 MeteoPI kernel: [230778.537887] 273 pages in swap cache
Nov 19 05:22:44 MeteoPI kernel: [230778.537897] Swap cache stats: add 25967, delete 25694, find 545/585
Nov 19 05:22:44 MeteoPI kernel: [230778.537903] Free swap  = 0kB
Nov 19 05:22:44 MeteoPI kernel: [230778.537909] Total swap = 102396kB
Nov 19 05:22:44 MeteoPI kernel: [230778.577160] 126976 pages of RAM
Nov 19 05:22:44 MeteoPI kernel: [230778.577184] 2333 free pages
Nov 19 05:22:44 MeteoPI kernel: [230778.577191] 2600 reserved pages
Nov 19 05:22:44 MeteoPI kernel: [230778.577213] 981 slab pages
Nov 19 05:22:44 MeteoPI kernel: [230778.577222] 164 pages shared
Nov 19 05:22:44 MeteoPI kernel: [230778.577228] 273 pages swap cached
Nov 19 05:22:44 MeteoPI kernel: [230778.577237] [ pid ]   uid  tgid total_vm      rss nr_ptes swapents oom_score_adj name
Nov 19 05:22:44 MeteoPI kernel: [230778.577277] [  154]     0   154      720        0       5      133         -1000 udevd
Nov 19 05:22:44 MeteoPI kernel: [230778.577296] [  643]     0   643      719        0       5      137         -1000 udevd
Nov 19 05:22:44 MeteoPI kernel: [230778.577313] [  651]     0   651      719        0       5      136         -1000 udevd
Nov 19 05:22:44 MeteoPI kernel: [230778.577330] [ 1542]     0  1542      436       13       4       18         -1000 ifplugd
Nov 19 05:22:44 MeteoPI kernel: [230778.577347] [ 1575]     0  1575      436       13       5       16         -1000 ifplugd
Nov 19 05:22:44 MeteoPI kernel: [230778.577364] [ 1845]     0  1845     6992       73       7       57             0 rsyslogd
Nov 19 05:22:44 MeteoPI kernel: [230778.578120] [ 1894]     0  1894     1085       14       6       31             0 cron
Nov 19 05:22:44 MeteoPI kernel: [230778.578149] [ 1949]     0  1949     1223       43       5      388         -1000 dhclient
Nov 19 05:22:44 MeteoPI kernel: [230778.578168] [ 1968]   104  1968      794        0       4       68             0 dbus-daemon
Nov 19 05:22:44 MeteoPI kernel: [230778.578185] [ 1997]   102  1997     1377       51       6       64             0 ntpd
Nov 19 05:22:44 MeteoPI kernel: [230778.578203] [ 2030] 65534  2030     1352      126       6       87             0 openvpn
Nov 19 05:22:44 MeteoPI kernel: [230778.578219] [ 2063] 65534  2063      503        8       4       26             0 thd
Nov 19 05:22:44 MeteoPI kernel: [230778.578236] [ 2080]     0  2080     1552        0       5      106         -1000 sshd
Nov 19 05:22:44 MeteoPI kernel: [230778.578254] [ 2101]  1002  2101   149318   117988     291    23836             0 weewxd
Nov 19 05:22:44 MeteoPI kernel: [230778.578272] [ 2130]     0  2130      436       13       4       16         -1000 ifplugd
Nov 19 05:22:44 MeteoPI kernel: [230778.578289] [ 2131]     0  2131     1072        0       6       32             0 getty
Nov 19 05:22:44 MeteoPI kernel: [230778.578306] [ 2132]     0  2132     1072        0       6       32             0 getty
Nov 19 05:22:44 MeteoPI kernel: [230778.578322] [ 2133]     0  2133     1072        0       6       32             0 getty
Nov 19 05:22:44 MeteoPI kernel: [230778.578339] [ 2134]     0  2134     1072        0       6       32             0 getty
Nov 19 05:22:44 MeteoPI kernel: [230778.578355] [ 2135]     0  2135     1072        0       6       32             0 getty
Nov 19 05:22:44 MeteoPI kernel: [230778.578374] [ 2136]     0  2136     1072        0       6       32             0 getty
Nov 19 05:22:44 MeteoPI kernel: [230778.578391] [ 2137]     0  2137      515        0       5       31             0 getty
Nov 19 05:22:44 MeteoPI kernel: [230778.578408] [ 2155]     0  2155     6629        0      10      272             0 console-kit-dae
Nov 19 05:22:44 MeteoPI kernel: [230778.578425] [ 2222]     0  2222     5571        0      10      120             0 polkitd
Nov 19 05:22:44 MeteoPI kernel: [230778.578440] Out of memory: Kill process 2101 (weewxd) score 947 or sacrifice child
Nov 19 05:22:44 MeteoPI kernel: [230778.578455] Killed process 2101 (weewxd) total-vm:597272kB, anon-rss:471696kB, file-rss:256kB




* Patching Python 2.7 with "tracemalloc".

It is enabled with:

import tracemalloc
tracemalloc.enable()
take_snapshot = tracemalloc.TakeSnapshot()
take_snapshot.filename_template = "/tmp/trace-$pid-$timestamp.pickle"
take_snapshot.start(60.0)   # presumably takes a snapshot every 60 seconds, written to /tmp per the template above

* Attach memory usage.

* Attach object counts.

-rw-r--r-- 1 weewx weewx 68410 Dec  6 05:26 trace-13222-2013-12-06-05:26:16.pickle
root@MeteoPI:/tmp# /opt/python-pytracemalloc/bin/python -m tracemalloc -l trace-13222-2013-12-06-05:26:16.pickle
tracemalloc 0.9.1

2013-12-06 05:26:16: Top 10 allocations per file and line
#1: .../python2.7/site-packages/configobj.py:531: size=24 MiB, count=79565, average=317 B
#2: .../python2.7/site-packages/configobj.py:603: size=23 MiB, count=292536, average=83 B
#3: .../bin/weedb/sqlite.py:113: size=16 MiB, count=365652, average=46 B
#4: .../python2.7/site-packages/configobj.py:604: size=12 MiB, count=13232, average=983 B
#5: .../python2.7/site-packages/configobj.py:640: size=11 MiB, count=10280, average=1125 B
#6: .../python2.7/site-packages/configobj.py:1318: size=10 MiB, count=179665, average=61 B
#7: .../python2.7/site-packages/configobj.py:1366: size=8 MiB, count=7724, average=1110 B
#8: .../bin/weedb/sqlite.py:117: size=6 MiB, count=202426, average=35 B
#9: .../python2.7/site-packages/configobj.py:1628: size=6 MiB, count=229253, average=30 B
#10: .../python2.7/site-packages/configobj.py:1610: size=6 MiB, count=45642, average=148 B
5734 more: size=62 MiB, count=1070906, average=61 B
Total Python memory: size=188 MiB, count=2496881, average=79 B
Total process memory: size=457 MiB (ignore tracemalloc: 32 KiB)

1 snapshots

Lines of code:

We focus on configobj.py.

Line 531:

        self.scalars = []
        # the sequence of sections in this Section
        self.sections = []
        # for comments :-)
        self.comments = {}

Line 603:

        # add the comment
        if key not in self.comments:
            self.comments[key] = []
            self.inline_comments[key] = ''
        # remove the entry from defaults

Line 1318:

                break

            infile = [line.rstrip('\r\n') for line in infile]

        self._parse(infile)

Line 1628:

                # is a keyword value
                # value will include any inline comment
                (indent, key, value) = mat.groups()
                if indent and (self.indent_type is None):
                    self.indent_type = indent


We use guppy.heapy:

+from guppy.heapy import Remote
+Remote.on()

I connect and query. WARNING: make sure I am connecting to the right process.

/opt/python-pytracemalloc/bin/python -c "from guppy import hpy; hpy().monitor()"

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
root@MeteoPI:/tmp# /opt/python-pytracemalloc/bin/python -m tracemalloc -l trace-18753-2013-12-06-21:06:07.pickle
tracemalloc 0.9.1

2013-12-06 21:06:07: Top 10 allocations per file and line
#1: .../site-packages/Cheetah/Compiler.py:1935: size=989 KiB, count=31, average=31 KiB
#2: .../site-packages/Cheetah/Compiler.py:1454: size=918 KiB, count=30, average=30 KiB
#3: .../site-packages/Cheetah/Compiler.py:363: size=903 KiB, count=999, average=925 B
#4: .../site-packages/Cheetah/Template.py:787: size=625 KiB, count=9725, average=65 B
#5: .../python2.7/site-packages/configobj.py:532: size=560 KiB, count=2795, average=205 B
#6: .../site-packages/Cheetah/Compiler.py:392: size=390 KiB, count=3777, average=105 B
#7: .../bin/weedb/sqlite.py:113: size=333 KiB, count=7191, average=47 B
#8: .../python2.7/site-packages/configobj.py:604: size=329 KiB, count=3782, average=89 B
#9: .../python2.7/site-packages/configobj.py:1319: size=238 KiB, count=3853, average=63 B
#10: .../site-packages/Cheetah/Compiler.py:396: size=217 KiB, count=1863, average=119 B
5726 more: size=7 MiB, count=98640, average=75 B
Total Python memory: size=12 MiB, count=132686, average=98 B
Total process memory: size=38 MiB (ignore tracemalloc: 12 KiB)

1 snapshots

root@MeteoPI:/tmp# /opt/python-pytracemalloc/bin/python -c "from guppy import hpy; hpy().monitor()"
<Monitor> 
*** Connection 1 opened ***
<Monitor> sc 1
Remote connection 1. To return to Monitor, type <Ctrl-C> or .<RETURN>
<Annex> ?

Documented commands (type help <topic>):
========================================
close  h  help  int  isolatest  q  reset  stat

<Annex> int
Remote interactive console. To return to Annex, type '-'.
>>> import configobj
>>> import sys
>>> sys.getrefcount(configobj.ConfigObj)
3
>>> import guppy
>>> from guppy import hpy
>>> hp = hpy()
>>> hp.heap()
Exception in thread Thread-14:
Traceback (most recent call last):
  File "/opt/python-pytracemalloc/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/opt/python-pytracemalloc/lib/python2.7/threading.py", line 763, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/opt/python-pytracemalloc/lib/python2.7/site-packages/guppy/heapy/Remote.py", line 146, in run
    line = ostdin.input.readline()
  File "/opt/python-pytracemalloc/lib/python2.7/socket.py", line 447, in readline
    data = self._sock.recv(self._rbufsize)
timeout: timed out

Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "/opt/python-pytracemalloc/lib/python2.7/site-packages/guppy/heapy/Use.py", line 188, in heap
    h = self.View.heap()
  File "/opt/python-pytracemalloc/lib/python2.7/site-packages/guppy/heapy/View.py", line 361, in heap
    return self.enter(lambda:
  File "/opt/python-pytracemalloc/lib/python2.7/site-packages/guppy/heapy/View.py", line 249, in enter
    type, value, traceback = sys.exc_info()
SocketClosed

....
>>> hp.setref()
>>> hp.heap()

(failures and timeouts galore)

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Not very useful, because all we see are basic types.

Let's analyze classes:

>>> import configobj
>>> import sys
>>> sys.getrefcount(configobj.Section)
8
>>> sys.getrefcount(configobj.ConfigObj)
3
>>> sys.getrefcount(str)
138
>>> hp.heap()
Partition of a set of 33172 objects. Total size = 2497300 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0  15071  45  1019320  41   1019320  41 str
     1   7392  22   301576  12   1320896  53 tuple
     2     96   0   150528   6   1471424  59 dict of module
     3   2028   6   146016   6   1617440  65 types.CodeType
     4    334   1   145256   6   1762696  71 dict (no owner)
     5    239   1   122356   5   1885052  75 dict of type
     6   1955   6   117300   5   2002352  80 function
     7    240   1   107200   4   2109552  84 type
     8    159   0    83316   3   2192868  88 dict of class
     9   1080   3    43200   2   2236068  90 __builtin__.wrapper_descriptor
<126 more rows. Type e.g. '_.more' to view.>
>>> import sys
>>> sys.getrefcount(str)
90

For the basic types, the reference counter does not get updated (to be investigated; presumably because instances of C-level static types, unlike instances of Python-defined heap classes, do not hold a counted reference to their type object):

>>> sys.getrefcount(str)
77
>>> a=" dfgd df g dfg"
>>> sys.getrefcount(str)
77

>>> import configobj
>>> sys.getrefcount(configobj.milista)
5
>>> sys.getrefcount(configobj.midict)
5
>>> b=configobj.milista()
>>> sys.getrefcount(configobj.milista)
6

It looks like some other object is taking ownership of those attributes. How can we find out?

We can modify the program to register those attributes somewhere using weakref, or we can create our own classes (the patch below does the latter; a weakref-based sketch appears a bit further down).

class milista(list) : pass
class midict(dict): pass

...

        self.scalars = []
        # the sequence of sections in this Section
        self.sections = milista()
        # for comments :-)
        self.comments = midict()
        self.inline_comments = {}
...

        # add the comment
        if key not in self.comments:
            self.comments[key] = milista()
            self.inline_comments[key] = ''


We get:

>>> sys.getrefcount(configobj.milista)
5
>>> sys.getrefcount(configobj.midict)
5

Is this a false alarm? At this point the process is 24 MB; that is, the leak keeps going, but it is not where it seems to be. With DTrace it would be trivial to find out, at the very least, which class is being created like crazy, and where.
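
The other approach mentioned earlier, a weakref-based registry, could look roughly like this (a sketch, not what was actually patched in here):

import weakref

# Hypothetical registry: every instance of the suspect classes gets
# recorded here; instances disappear from the sets automatically when
# they are destroyed, so len() gives the number of live objects.
live_listas = weakref.WeakSet()
live_dicts = weakref.WeakSet()

class milista(list):
    def __init__(self, *args, **kwargs):
        list.__init__(self, *args, **kwargs)
        live_listas.add(self)

class midict(dict):
    def __init__(self, *args, **kwargs):
        dict.__init__(self, *args, **kwargs)
        live_dicts.add(self)

# From the remote console one can then simply ask:
#   len(live_listas), len(live_dicts)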



>>> from guppy import hpy
>>> hp = hpy()
>>> hp.setref()

Then we watch the increments little by little.
>>> hp.heap()

I see no growth, even though memory keeps leaking.

Now I use Pympler, going in through the remote console that guppy gives me:

>>> from pympler import muppy
>>> all_objects = muppy.get_objects()
>>> len(all_objects)    # LEAVING SEVERAL MINUTES BETWEEN DUMPS
32941
>>> all_objects = muppy.get_objects(); len(all_objects)
32940
>>> all_objects = muppy.get_objects(); len(all_objects)
32944
>>> all_objects = muppy.get_objects(); len(all_objects)
32966
>>> all_objects = muppy.get_objects(); len(all_objects)
32967
>>> all_objects = muppy.get_objects(); len(all_objects)
32991

>>> from pympler import summary
>>> sum1 = summary.summarize(all_objects)
(some time later)
>>> sum2 = summary.summarize(muppy.get_objects())
>>> diff = summary.get_diff(sum1, sum2)
>>> summary.print_(diff)
                       types |   # objects |   total size
============================ | =========== | ============
                        list |        1693 |    226.69 KB
                         str |        1699 |     69.24 KB
                         int |         142 |      1.66 KB
      exceptions.SyntaxError |           4 |    240     B
                   traceback |           3 |    120     B
                     unicode |           1 |    116     B
                        long |           1 |     14     B
             function (read) |           0 |      0     B
  module(guppy.heapy.UniSet) |           0 |      0     B
        function (b64decode) |           0 |      0     B
       function (splitvalue) |           0 |      0     B
          function (ioready) |           0 |      0     B
             <class '_ast.Eq |           0 |      0     B
         function (rollover) |           0 |      0     B
           function (ptable) |           0 |      0     B

(some time later)

>>> sum2 = summary.summarize(muppy.get_objects())
>>> diff = summary.get_diff(sum1, sum2)
>>> summary.print_(diff)
                       types |   # objects |   total size
============================ | =========== | ============
                        list |        6763 |    485.72 KB
                         str |        5079 |    207.14 KB
                         int |         421 |      4.93 KB
      exceptions.SyntaxError |           4 |    240     B
                   traceback |           3 |    120     B
                     unicode |           1 |    116     B
                        long |           4 |     54     B
             function (read) |           0 |      0     B
  module(guppy.heapy.UniSet) |           0 |      0     B
        function (b64decode) |           0 |      0     B
       function (splitvalue) |           0 |      0     B
          function (ioready) |           0 |      0     B
             <class '_ast.Eq |           0 |      0     B
         function (rollover) |           0 |      0     B
           function (ptable) |           0 |      0     B

>>> len(muppy.get_objects())
45262
>>> import gc
>>> gc.collect()
0
>>> len(muppy.get_objects())
45262

>>> sum1 = sum2
>>> sum2 = summary.summarize(muppy.get_objects())
>>> diff = summary.get_diff(sum1, sum2)
>>> summary.print_(diff)
                       types |   # objects |   total size
============================ | =========== | ============
                        list |        3380 |    172.69 KB
                         str |        1689 |     68.93 KB
                         int |         157 |      1.84 KB
                        long |           2 |     26     B
             function (read) |           0 |      0     B
  module(guppy.heapy.UniSet) |           0 |      0     B
        function (b64decode) |           0 |      0     B
       function (splitvalue) |           0 |      0     B
          function (ioready) |           0 |      0     B
             <class '_ast.Eq |           0 |      0     B
         function (rollover) |           0 |      0     B
           function (ptable) |           0 |      0     B
          function (S_ISREG) |           0 |      0     B
       function (isreadable) |           0 |      0     B
            function (c_len) |           0 |      0     B

>>> len(muppy.get_objects())
60317

Well, it looks like I can finally see the leak.

>>> import configobj
>>> sys.getrefcount(configobj.ConfigObj)
3
>>> sys.getrefcount(configobj.Section)
8

My code:

weewx@MeteoPI:~$ cat bin/weewx/memoryleak.py
from functools import wraps
import os
def logMem(f) :
    pid = os.getpid()
    @wraps(f)
    def wrapper(*args, **kwargs) :
        # First field of /proc/<pid>/statm: total program size, in pages.
        size = int(open("/proc/%d/statm" %pid).read().split()[0])
        try :
            return f(*args, **kwargs)
        finally :
            size2 = int(open("/proc/%d/statm" %pid).read().split()[0])
            print >>open("/tmp/zz", "a"), \
                f.__module__+"."+f.__name__,size, size2
    return wrapper

I get output like this:

reportengine (run) 29073 29225
weewx.cheetahgenerator.generate 29224 29224
weewx.cheetahgenerator.generate 29224 29224
weewx.cheetahgenerator.generate 29224 29224
weewx.cheetahgenerator.run 29224 29224
weewx.cheetahgenerator.generate 29410 29410
weewx.cheetahgenerator.run 29410 29410
reportengine (run) 29224 29410
weewx.cheetahgenerator.generate 29410 29410
weewx.cheetahgenerator.generate 29410 29410
weewx.cheetahgenerator.generate 29410 29410
weewx.cheetahgenerator.run 29410 29410
weewx.cheetahgenerator.generate 29600 29599
weewx.cheetahgenerator.run 29600 29599
reportengine (run) 29410 29599

root@MeteoPI:/tmp# grep reportengine zz
...
reportengine (run) 27999 28238
reportengine (run) 28238 28422
reportengine (run) 28421 28543
reportengine (run) 28542 28962
reportengine (run) 28962 29073
reportengine (run) 29073 29225
reportengine (run) 29224 29410
reportengine (run) 29410 29599

It looks like the memory leak is in this function. It could also be that the function needs a lot of memory to do its job and then frees those objects, with Python never returning the memory to the operating system. But in that case the memory would stay free and would be reused by the rest of Python, including this very function the next time it is called.

Can I use my decorator to pinpoint EXACTLY where the memory is being lost?

Should I use https://pypi.python.org/pypi/contextdecorator so that my class can act both as a context manager and as a decorator? As a decorator it instruments a whole function; as a context manager I can instrument just part of a function body.
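
The logMem2 probe used below is not reproduced in these notes; a minimal sketch of how such a combined context manager / decorator could be written with just the standard library (no contextdecorator dependency) might be:

import os
from functools import wraps

class logMem2(object):
    """Log the process size (first field of /proc/<pid>/statm, in pages)
    before and after a block of code.  Usable as a context manager
    ("with logMem2('RENDER'): ...") and as a decorator."""

    def __init__(self, tag):
        self.tag = tag
        self.pid = os.getpid()

    def _pages(self):
        return int(open("/proc/%d/statm" % self.pid).read().split()[0])

    def __enter__(self):
        self.before = self._pages()
        return self

    def __exit__(self, exc_type, exc_value, tb):
        with open("/tmp/zz", "a") as log:
            log.write("%s %d %d\n" % (self.tag, self.before, self._pages()))
        return False          # never swallow exceptions

    def __call__(self, f):    # decorator support
        @wraps(f)
        def wrapper(*args, **kwargs):
            with logMem2(self.tag):
                return f(*args, **kwargs)
        return wrapper

This writes the same "tag before after" lines to /tmp/zz as the decorator above.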


I see the following (the Spanish tags are the names I gave the probes: "Informes setup" = report setup, "Preparativos para el informe" = report preparation, "Bucle Generator" = generator loop):

Informes setup 17377 17377
Preparativos para el informe 17377 17633
weewx.cheetahgenerator.generate 17633 17633
weewx.cheetahgenerator.generate 17633 17633
weewx.cheetahgenerator.generate 17633 17633
weewx.cheetahgenerator.run 17633 17633
Bucle Generator 17633 17768
Preparativos para el informe 17768 17768
weewx.cheetahgenerator.generate 17768 17768
weewx.cheetahgenerator.run 17768 17768
Bucle Generator 17768 17768
Preparativos para el informe 17768 17768
Bucle Generator 17768 17768
Preparativos para el informe 17768 17768
Bucle Generator 17768 17768
reportengine (run) 17377 17768

Refining the position of the probes, I get to:

/home/weewx/bin/weewx/imagegenerator.py :

    def genImages(self, gen_ts):
...
                # Create a new instance of a time plot and start adding to it
                plot = weeplot.genplot.TimePlot(plot_options)
...
                # OK, the plot is ready. Render it onto an image
                with memoryleak.logMem2("RENDER"):
                  image = plot.render()
                
                # Now save the image
                with memoryleak.logMem2("SAVE"):
                  image.save(img_file)
                ngen += 1

Where would you expect the problem to be? In rendering a complete plot, or in saving an already generated image to disk?

RENDER 15349 15349
SAVE 15349 15383
...
RENDER 15561 15561
SAVE 15561 15606
...
RENDER 15898 15898
SAVE 15898 15945
...
RENDER 16088 16088
SAVE 16088 16135
...
RENDER 16296 16343  # Anomaly
SAVE 16343 16343
...
RENDER 16343 16343
SAVE 16343 16407
...

Let's change the program:

                # OK, the plot is ready. Render it onto an image
                with memoryleak.logMem2("RENDER"):
                  image = plot.render()

                # Now save the image
                with memoryleak.logMem2("SAVE"):
                  image.save(img_file)
                  for i in xrange(100) :
                      image.save("/tmp/dummy.png")
                ngen += 1

...

root@MeteoPI:~# /opt/python-pytracemalloc/bin/python
Python 2.7.6 (default, Nov 26 2013, 22:07:44) 
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import PIL
>>> PIL.PILLOW_VERSION
'2.1.0'

There is no extra memory consumption. Hypothesis: this method uses quite a lot of memory and then frees it. That freed memory is then reused by the objects that are actually leaking, so the next time an image is saved a new "malloc" is needed. The malloc therefore happens here, but it is not the cause.

Once again, we need to look at which classes are being instantiated the most. But since we are dealing with primitive types, that is tricky.

Back to guppy:

/opt/python-pytracemalloc/bin/python -c "from guppy import hpy; hpy().monitor()"

>>> from guppy import hpy
>>> hp = hpy()
>>> hp.heap()
Partition of a set of 127396 objects. Total size = 13380596 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0  57068  45  4303080  32   4303080  32 str
     1   4411   3  3604412  27   7907492  59 unicode
     2  25791  20  1122476   8   9029968  67 tuple
     3   1390   1   693416   5   9723384  73 dict (no owner)
     4    333   0   509340   4  10232724  76 dict of module
     5   6795   5   489240   4  10721964  80 types.CodeType
     6   6627   5   397620   3  11119584  83 function
     7    664   1   317792   2  11437376  85 dict of type
     8   5441   4   307172   2  11744548  88 list
     9    664   1   298804   2  12043352  90 type
<326 more rows. Type e.g. '_.more' to view.>
>>> hp.setrelheap()
*** Connection 1 closed ***
<Monitor>
*** Connection 1 opened ***
<Monitor> sc 1
Remote connection 1. To return to Monitor, type <Ctrl-C> or .<RETURN>
Traceback (most recent call last):
  File "/opt/python-pytracemalloc/lib/python2.7/site-packages/guppy/heapy/Remote.py", line 394, in run
    self.cmdloop()
  File "/opt/python-pytracemalloc/lib/python2.7/site-packages/guppy/etc/cmd.py", line 137, in cmdloop
    line = self.stdin.readline()
  File "/opt/python-pytracemalloc/lib/python2.7/site-packages/guppy/heapy/Remote.py", line 54, in readline
    return self.input.readline(size)
  File "/opt/python-pytracemalloc/lib/python2.7/site-packages/guppy/heapy/Remote.py", line 38, in readline
    return self.get(timeout=0.5)
  File "/opt/python-pytracemalloc/lib/python2.7/Queue.py", line 182, in get
    self.not_empty.release()
error: release unlocked lock
<Annex> sc 1
(it hangs)
^CKeyboardInterrupt
Use the ki command to interrupt a remote process.
(I kill the interpreter)
root@MeteoPI:~# /opt/python-pytracemalloc/bin/python -c "from guppy import hpy; hpy().monitor()"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/opt/python-pytracemalloc/lib/python2.7/site-packages/guppy/heapy/Monitor.py", line 476, in monitor
    m = Monitor()
  File "/opt/python-pytracemalloc/lib/python2.7/site-packages/guppy/heapy/Monitor.py", line 361, in __init__
    self.connection = self.monitor_connection = MonitorConnection(self)
  File "/opt/python-pytracemalloc/lib/python2.7/site-packages/guppy/heapy/Monitor.py", line 146, in __init__
    self.server = s = Server((LOCALHOST, HEAPYPORT), Handler)
  File "/opt/python-pytracemalloc/lib/python2.7/SocketServer.py", line 419, in __init__
    self.server_bind()
  File "/opt/python-pytracemalloc/lib/python2.7/SocketServer.py", line 430, in server_bind
    self.socket.bind(self.server_address)
  File "/opt/python-pytracemalloc/lib/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
socket.error: [Errno 98] Address already in use

(we wait a few minutes)

root@MeteoPI:~# /opt/python-pytracemalloc/bin/python -c "from guppy import hpy; hpy().monitor()"
<Monitor> sc 1
Remote connection 1. To return to Monitor, type <Ctrl-C> or .<RETURN>
----------------------------------------
Exception happened during processing the command 'sc 1'
Traceback (most recent call last):
  File "/opt/python-pytracemalloc/lib/python2.7/site-packages/guppy/heapy/Monitor.py", line 177, in exec_cmd
    stop = self.onecmd(line)
  File "/opt/python-pytracemalloc/lib/python2.7/site-packages/guppy/etc/cmd.py", line 224, in onecmd
    return func(arg)
  File "/opt/python-pytracemalloc/lib/python2.7/site-packages/guppy/heapy/Monitor.py", line 331, in do_sc
    self.monitor.set_connection(self.monitor.connections[self.conid])
KeyError: 1
----------------------------------------
<Monitor> exit
root@MeteoPI:~# /opt/python-pytracemalloc/bin/python -c "from guppy import hpy; hpy().monitor()"
<Monitor> lc
CID PID ARGV

(we restart the remote process)

>>> h = hp.heap()
>>> dir(h)
['__and__', '__call__', '__class__', '__contains__', '__delattr__', '__doc__', '__eq__', '__format__', '__ge__', '__getattr__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__init__', '__invert__', '__iter__', '__le__', '__len__', '__lshift__', '__lt__', '__module__', '__mul__', '__ne__', '__new__', '__nonzero__', '__or__', '__rand__', '__reduce__', '__reduce_ex__', '__repr__', '__ror__', '__rsub__', '__rxor__', '__setattr__', '__sizeof__', '__slots__', '__str__', '__sub__', '__subclasshook__', '__xor__', '_derive_origin_', '_doc_nodes', '_er', '_get_help', '_help_url_', '_hiding_tag_', '_instahelp_', '_origin_', '_partition', 'biper', 'brief', 'by', 'byclass', 'byclodo', 'byid', 'byidset', 'bymodule', 'byrcs', 'bysize', 'bytype', 'byunity', 'byvia', 'count', 'dictof', 'diff', 'disjoint', 'doc', 'dominos', 'domisize', 'dump', 'er', 'fam', 'get_ckc', 'get_examples', 'get_render', 'get_rp', 'get_shpaths', 'imdom', 'indisize', 'kind', 'maprox', 'more', 'nodes', 'owners', 'partition', 'parts', 'pathsin', 'pathsout', 'referents', 'referrers', 'rp', 'shpaths', 'size', 'sp', 'stat', 'test_contains', 'theone']

>>> h.bymodule
Partition of a set of 9593 objects. Total size = 701912 bytes.
 Index  Count   %     Size   % Cumulative  % Module
     0   9585 100   701688 100    701688 100 ~module
     1      1   0       28   0    701716 100 dis
     2      1   0       28   0    701744 100 imp
     3      1   0       28   0    701772 100 inspect
     4      1   0       28   0    701800 100 opcode
     5      1   0       28   0    701828 100 pkgutil
     6      1   0       28   0    701856 100 pydoc
     7      1   0       28   0    701884 100 token
     8      1   0       28   0    701912 100 tokenize

>>> h.byrcs
(The remote process hangs again)


TRACEMALLOC: PEP 454, issue 18874
Infrastructure: PEP 445 (pluggable mallocs)

There are useful things like "tracemalloc.get_object_traceback(obj)" that are not implemented :-((. But we do have the undocumented "tracemalloc._get_object_trace(obj)". Careful: the API differs between 3.4 and 2.7.
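
For reference, in the final Python 3.4 API the documented call works roughly like this (Python 3.4+ only; it is not available in the 2.7 backport used here):

import tracemalloc

tracemalloc.start(25)     # keep up to 25 frames per allocation

# Any object allocated *after* start() can be asked where it came from.
suspect = ["allocated", "after", "start"]

tb = tracemalloc.get_object_traceback(suspect)
if tb is not None:
    for line in tb.format():
        print(line)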


The "gc" module can be useful, but simple objects that cannot take part in cycles, such as strings or numbers, are not tracked:
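
This can be checked directly with gc.is_tracked():

import gc

# Strings and numbers can never be part of a reference cycle, so the
# cyclic collector ignores them completely.
print(gc.is_tracked("just a string"))   # False
print(gc.is_tracked(12345))             # False

# Containers that can hold references to other containers are tracked.
print(gc.is_tracked([1, 2, 3]))         # True
print(gc.is_tracked({"key": []}))       # True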

<Monitor> sc
Remote connection 1. To return to Monitor, type <Ctrl-C> or .<RETURN>
Traceback (most recent call last):
  File "/opt/python-pytracemalloc/lib/python2.7/site-packages/guppy/heapy/Remote.py", line 394, in run
    self.cmdloop()
  File "/opt/python-pytracemalloc/lib/python2.7/site-packages/guppy/etc/cmd.py", line 137, in cmdloop
    line = self.stdin.readline()
  File "/opt/python-pytracemalloc/lib/python2.7/site-packages/guppy/heapy/Remote.py", line 54, in readline
    return self.input.readline(size)
  File "/opt/python-pytracemalloc/lib/python2.7/site-packages/guppy/heapy/Remote.py", line 38, in readline
    return self.get(timeout=0.5)
  File "/opt/python-pytracemalloc/lib/python2.7/Queue.py", line 182, in get
    self.not_empty.release()
error: release unlocked lock


Ugh.

NOTE: LOOK AT GDB-HEAP.

I NEED A GOOD REMOTE CONSOLE.

CHECK THAT WHAT TRACEMALLOC REPORTS IS THE NUMBER OF OBJECTS STILL ALIVE, NOT THE TOTAL NUMBER OF OBJECTS EVER CREATED. Verified by writing a specific test program.
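
That test program is not included in these notes; the idea, written here against the final Python 3.4 API rather than the 0.9 backport, is something like:

import tracemalloc

tracemalloc.start()

def churn(n):
    # Allocate and immediately discard n biggish lists.  If tracemalloc
    # reported every allocation ever made, these would dominate the
    # snapshot; if it reports live objects only, they will not appear.
    for _ in range(n):
        scratch = [0] * 1000

churn(100000)
kept = [[0] * 1000 for _ in range(100)]   # only these stay alive

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:5]:
    print(stat)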


from rfoo.utils import rconsole
rconsole.spawn_server()

$ /opt/python-pytracemalloc/bin/rconsole

If, instead of using guppy's remote mode, I use rconsole, I get interesting results:

>>> sys.getrefcount(configobj.milista)
212

(later)
>>> sys.getrefcount(configobj.milista)
864
>>> h=hp.heap()
>>> h
Partition of a set of 48312 objects. Total size = 7103516 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0   4402   9  3604176  51   3604176  51 unicode
     1  18537  38  1617616  23   5221792  74 str
     2   7892  16   365932   5   5587724  79 tuple
     3    490   1   256760   4   5844484  82 dict of Cheetah.Compiler.MethodCompiler
     4   4611  10   235916   3   6080400  86 list
     5    729   2   225516   3   6305916  89 dict (no owner)
     6   1941   4   139752   2   6445668  91 types.CodeType
     7   1921   4   115260   2   6560928  92 function
     8     77   0    91036   1   6651964  94 dict of module
     9    130   0    72920   1   6724884  95 dict of type
<121 more rows. Type e.g. '_.more' to view.>

This makes much more sense.

>>> hp.setref()
(we wait)
>>> sys.getrefcount(configobj.midict)
40
>>> sys.getrefcount(configobj.milista)
65

The values oscillate between 400 and what you can see above. In other words, it looks like the data is being destroyed properly.

>>> sys.getrefcount(configobj.milista)
493
>>> sys.getrefcount(configobj.ConfigObj)
16
(a while later)
>>> sys.getrefcount(configobj.ConfigObj)
30
>>> gc.collect()
5049
>>> sys.getrefcount(configobj.ConfigObj)
16

Hmm, apparently there are cycles, because the garbage collector is cleaning them up.

>>> gc.garbage
[]
>>> sys.getrefcount(configobj.ConfigObj)
9
>>> sys.getrefcount(configobj.milista)
318

(we wait)

>>> sys.getrefcount(configobj.ConfigObj)
20
>>> gc.collect()
3957
>>> sys.getrefcount(configobj.ConfigObj)
5

It seems clear that cycles are being formed.

>>> gc.get_count()
(1, 4, 1)
>>> gc.get_threshold()
(700, 10, 10)

>>> gc.set_debug(gc.DEBUG_STATS|gc.DEBUG_LEAK)

This prints its output on the terminal, not on the remote console we have open right now. On the terminal I see things like:

gc: collectable <function 0x1ee04b0>

Garbage collection of functions? Interesting. The structure looks something like this:

...
gc: collectable <cell 0xb3347730>
gc: collectable <cell 0xb3347750>
gc: collectable <tuple 0xb3352738>
gc: collectable <function 0x22f8930>
...

This structure repeats over and over.

>>> gc.set_debug(0)
>>> len(gc.garbage) 
16267
>>> del gc.garbage[:]
>>> len(gc.garbage) 
0
>>> gc.set_debug(gc.DEBUG_SAVEALL)
>>> gc.collect()
21894
>>> gc.garbage[0]
<guppy.heapy.View.Gchook_type object at 0x2195440>
>>> type(gc.garbage[0])
<class 'guppy.heapy.View.Gchook_type'>
>>> type(gc.garbage[1])
<type 'cell'>
>>> type(gc.garbage[2])
<type 'cell'>
>>> type(gc.garbage[3])
<type 'tuple'>
>>> type(gc.garbage[4])
<type 'function'>
>>> gc.garbage[4]
<function recursive_interpolate at 0x1cc9fb0>
>>> gc.garbage[1]
<cell at 0xb2724f10: function object at 0x1cc9fb0>
>>> gc.garbage[2]
<cell at 0xb2724d70: ConfigParserInterpolation object at 0xb27243f0>
>>> gc.garbage[3]
(<cell at 0xb2724f10: function object at 0x1cc9fb0>,
 <cell at 0xb2724d70: ConfigParserInterpolation object at 0xb27243f0>)

>>> gc.garbage[4].func_name
'configobj'
>>> gc.garbage[4].__name__
'recursive_interpolate'

(a few minutes later)
>>> gc.collect()
51317

A huge number of cycles are being created.
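
That cell / tuple / function pattern is what CPython creates when a nested function captures itself in its own closure, which is what configobj's recursive_interpolate does. A minimal reproduction of the pattern (not configobj's actual code):

import gc

class Interpolation(object):
    def interpolate(self, value):
        def recursive_interpolate(v, depth=0):
            # This inner function closes over itself (and over 'self'),
            # so CPython builds: function -> closure tuple -> cell ->
            # function.  That is a reference cycle that plain reference
            # counting can never free; only gc.collect() reclaims it.
            if depth < 3 and self is not None:
                return recursive_interpolate(v, depth + 1)
            return v
        return recursive_interpolate(value)

gc.collect()                    # start from a clean slate
engine = Interpolation()
for _ in range(1000):
    engine.interpolate("x")
print(gc.collect())             # thousands of cells, tuples and functions

Each call to interpolate() leaves behind a function object, its closure tuple and a couple of cells that only the cyclic collector can reclaim, which matches the gc.DEBUG_LEAK output above.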

The latest version of "configobj" is from February 2010 (4.7.2).

We start over from the beginning:

>>> from guppy import hpy
>>> hp=hpy()
>>> h=hp.heap()
>>> h

>>> c = h[0]    (we keep just the strings)
>>> c.byrcs
Partition of a set of 55862 objects. Total size = 4234596 bytes.
 Index  Count   %     Size   % Cumulative  % Referrers by Kind (class / dict of class)
     0  20665  37  1686420  40   1686420  40 types.CodeType
     1   2354   4   609596  14   2296016  54 function, tuple
     2   4741   8   485128  11   2781144  66 list
     3   9602  17   403732  10   3184876  75 tuple
     4   2075   4   259748   6   3444624  81 dict of module
     5   4917   9   165848   4   3610472  85 dict (no owner)
     6    737   1   161468   4   3771940  89 dict of type
     7    733   1    47248   1   3819188  90 dict of type, tuple
     8   1108   2    38020   1   3857208  91 dict of module, tuple
     9    490   1    37612   1   3894820  92 dict of Cheetah.Compiler.MethodCompiler
<537 more rows. Type e.g. '_.more' to view.>
>>> r = c.byrcs[0].referrers
Partition of a set of 6950 objects. Total size = 500400 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0   6950 100   500400 100    500400 100 types.CodeType
>>> r.byrcs
Partition of a set of 6950 objects. Total size = 500400 bytes.
 Index  Count   %     Size   % Cumulative  % Referrers by Kind (class / dict of class)
     0   6548  94   471456  94    471456  94 function
     1    332   5    23904   5    495360  99 tuple
     2     42   1     3024   1    498384 100 function, types.FrameType
     3     19   0     1368   0    499752 100 function, tuple
     4      3   0      216   0    499968 100 function, tuple, types.FrameType
     5      2   0      144   0    500112 100 <Nothing>
     6      2   0      144   0    500256 100 types.FrameType
     7      1   0       72   0    500328 100 dict (no owner)
     8      1   0       72   0    500400 100 tuple, types.FrameType
>>> f = r.byrcs[0].nodes
>>> len(f)
6552
>>> f=list(f)
>>> f[0]
<code object Semaphore at 0xe0a020, file "/opt/python-pytracemalloc/lib/python2.7/threading.py", line 411>
>>> f[1234]
<code object truncate at 0xfbfda0, file "/opt/python-pytracemalloc/lib/python2.7/tempfile.py", line 603>
>>> f[3456]
<code object getint at 0x15a5e78, file "/opt/python-pytracemalloc/lib/python2.7/ConfigParser.py", line 358>

>>> hp.setref()
(We wait a long while)
>>> h = hp.heap()
>>> h
Partition of a set of 454 objects. Total size = 52480 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0     32   7    16768  32     16768  32 dict of weewx.accum.ScalarStats
     1     87  19    11864  23     28632  55 str
     2      9   2     5100  10     33732  64 dict (no owner)
     3     10   2     4236   8     37968  72 types.FrameType
     4     53  12     2332   4     40300  77 __builtin__.weakref
     5    119  26     1904   4     42204  80 float
     6      5   1     1620   3     43824  84 list
     7     25   6     1072   2     44896  86 tuple
     8      2   0     1064   2     45960  88 weewx.accum.WXAccum
     9      2   0     1048   2     47008  90 dict of weewx.accum.VecStats
<27 more rows. Type e.g. '_.more' to view.>


root@MeteoPI:/tmp# /opt/python-pytracemalloc/bin/python -m tracemalloc -l  trace-29214-2013-12-11-18:55:31.pickle trace-29214-2013-12-11-19:10:02.pickle
tracemalloc 0.9.1

2013-12-11 18:55:31: Top 10 allocations per file and line
#1: .../guppy/heapy/Classifiers.py:34: size=2047 KiB, count=123, average=16 KiB
#2: .../python2.7/site-packages/configobj.py:532: size=1053 KiB, count=5598, average=192 B
#3: .../site-packages/Cheetah/Compiler.py:1935: size=989 KiB, count=31, average=31 KiB
#4: .../site-packages/Cheetah/Compiler.py:1454: size=918 KiB, count=30, average=30 KiB
#5: .../site-packages/Cheetah/Compiler.py:363: size=903 KiB, count=999, average=925 B
#6: .../guppy/heapy/View.py:488: size=818 KiB, count=2, average=409 KiB
#7: .../bin/weedb/sqlite.py:113: size=771 KiB, count=16874, average=46 B
#8: .../python2.7/site-packages/configobj.py:604: size=745 KiB, count=7716, average=98 B
#9: .../guppy/heapy/ImpSet.py:22: size=727 KiB, count=121, average=6 KiB
#10: .../site-packages/Cheetah/Template.py:787: size=625 KiB, count=9724, average=65 B
6732 more: size=11 MiB, count=158027, average=78 B
Total Python memory: size=21 MiB, count=199245, average=111 B
Total process memory: size=57 MiB (ignore tracemalloc: 19 KiB)

2013-12-11 19:10:02: Top 10 allocations per file and line (compared to 2013-12-11 18:55:31)
#1: .../guppy/heapy/View.py:362: size=400 KiB (+400 KiB), count=74 (+72), average=5 KiB
#2: .../python2.7/site-packages/configobj.py:532: size=1230 KiB (+176 KiB), count=6482 (+884), average=194 B
#3: .../python2.7/site-packages/configobj.py:604: size=827 KiB (+81 KiB), count=8597 (+881), average=98 B
#4: .../python2.7/site-packages/configobj.py:605: size=532 KiB (+54 KiB), count=550 (+63), average=991 B
#5: .../python2.7/site-packages/configobj.py:534: size=269 KiB (+51 KiB), count=1866 (+358), average=148 B
#6: .../python2.7/site-packages/configobj.py:641: size=462 KiB (+49 KiB), count=418 (+47), average=1133 B
#7: .../python2.7/site-packages/configobj.py:535: size=323 KiB (+36 KiB), count=2428 (+276), average=136 B
#8: .../python2.7/site-packages/configobj.py:1319: size=460 KiB (+36 KiB), count=8034 (+647), average=58 B
#9: .../guppy/heapy/Classifiers.py:34: size=2083 KiB (+35 KiB), count=164 (+41), average=12 KiB
#10: .../python2.7/site-packages/configobj.py:513: size=311 KiB (+35 KiB), count=2336 (+266), average=136 B
6803 more: size=15 MiB (+367 KiB), count=178985 (+7154), average=92 B
Total Python memory: size=22 MiB (+1325 KiB), count=209934 (+10689), average=112 B
Total process memory: size=63 MiB (+5 MiB) (ignore tracemalloc: 19 KiB)

2 snapshots



>>> gc.collect()
2095

tracemalloc still shows growth.




