Psycopg execute file

We must stress this point: never build a query by concatenating or interpolating Python values into the SQL string yourself, not even at gunpoint. The correct way to pass variables in a SQL command is to use the second argument of the execute() method:
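
A minimal sketch of the difference (the users table and the name column are made up for illustration):

```python
import psycopg2

conn = psycopg2.connect("dbname=test user=postgres")  # hypothetical connection string
cur = conn.cursor()

name = "O'Reilly"

# WRONG: interpolating the value yourself invites SQL injection and quoting bugs.
# cur.execute("SELECT * FROM users WHERE name = '%s'" % name)

# RIGHT: pass the values as the second argument; psycopg2 quotes them safely.
cur.execute("SELECT * FROM users WHERE name = %s", (name,))
rows = cur.fetchall()
```

Note that the placeholder is always %s regardless of the column type, and a single parameter must still be passed inside a tuple (or a mapping when using %(name)s placeholders).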

Many standard Python types are adapted into SQL and returned as Python objects when a query is executed. You can also find a few other specialized adapters in the psycopg2.extras module. Python numeric objects (int, long, float, Decimal) are converted into a PostgreSQL numerical representation. Sometimes you may prefer to receive numeric data as float instead, for performance reasons or ease of manipulation: in that case you can configure an adapter to cast PostgreSQL numeric to Python float.
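
A sketch of such a cast, along the lines of the example in the psycopg2 documentation (registered globally here; the scope can also be limited to a connection or cursor):

```python
import psycopg2
import psycopg2.extensions

# Convert PostgreSQL numeric/decimal values to Python float instead of Decimal.
DEC2FLOAT = psycopg2.extensions.new_type(
    psycopg2.extensions.DECIMAL.values,   # the OIDs handled by the DECIMAL caster
    'DEC2FLOAT',
    lambda value, cur: float(value) if value is not None else None)
psycopg2.extensions.register_type(DEC2FLOAT)
```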

This of course may imply a loss of precision (see the PostgreSQL documentation on numeric types). Python str and unicode are converted into the SQL string syntax. Data is usually received as str (i.e. as encoded bytes on Python 2); however it is possible to receive unicode on Python 2 too: see Unicode handling below. Python unicode objects are automatically encoded in the client encoding defined on the database connection (the PostgreSQL client encoding), available as connection.encoding. When reading data from the database, in Python 2 the strings returned are usually 8-bit str objects encoded in the database client encoding:
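
For example, you can inspect or change the client encoding on the connection (a small sketch assuming a database named test):

```python
import psycopg2

conn = psycopg2.connect("dbname=test")
print(conn.encoding)                 # the client encoding in use, e.g. 'UTF8'
conn.set_client_encoding('LATIN1')   # change the encoding used for this session
```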

In Python 3, instead, the strings are automatically decoded using the connection encoding, as the str object can represent Unicode characters. In Python 2 you must register a typecaster in order to receive unicode objects:
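
A sketch of the cursor-scoped registration (assuming an existing connection conn and a hypothetical users table):

```python
# Python 2: decode text results to unicode for this cursor only.
import psycopg2.extensions

cur = conn.cursor()
psycopg2.extensions.register_type(psycopg2.extensions.UNICODE, cur)

cur.execute("SELECT name FROM users LIMIT 1")
value = cur.fetchone()[0]   # a unicode object instead of an encoded str
```

The second argument of register_type() limits the scope of the cast: a cursor, a connection, or (if omitted) every connection.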

In Python 2, if you want to uniformly receive all your database input in Unicode, you can register the related typecasters globally as soon as Psycopg is imported, as shown in the sketch below. In some cases, on Python 3, you may want the opposite and receive bytes instead of str, without any decoding; this is especially the case if the data in the database is stored in mixed encodings.
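
A sketch of both registrations (the connection object conn in the Python 3 variant is assumed to already exist):

```python
import psycopg2
import psycopg2.extensions

# Python 2: receive every text value as unicode, for all connections.
psycopg2.extensions.register_type(psycopg2.extensions.UNICODE)
psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY)

# Python 3: receive undecoded bytes instead of str, for a single connection
# (the BYTES/BYTESARRAY casters are available in recent psycopg2 releases).
# psycopg2.extensions.register_type(psycopg2.extensions.BYTES, conn)
# psycopg2.extensions.register_type(psycopg2.extensions.BYTESARRAY, conn)
```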

Python types representing binary objects are converted into the PostgreSQL binary string syntax, suitable for bytea fields. Any object implementing the Revised Buffer Protocol should be usable as a binary type. Received data is returned as buffer in Python 2 or memoryview in Python 3. In Python 2, if you have binary data in a str object, you can pass it to a bytea field using the psycopg2.Binary wrapper. Since PostgreSQL 9.0 the server emits bytea data in the hex format by default; recent Psycopg versions support this format transparently.
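
A sketch of a bytea round trip (the blobs table and picture.png file are made up):

```python
import psycopg2

conn = psycopg2.connect("dbname=test")
cur = conn.cursor()

# Assumed schema: CREATE TABLE blobs (id serial PRIMARY KEY, data bytea)
with open('picture.png', 'rb') as f:
    data = f.read()

# psycopg2.Binary wraps raw bytes so they are sent using the bytea syntax.
cur.execute("INSERT INTO blobs (data) VALUES (%s)", (psycopg2.Binary(data),))
conn.commit()

cur.execute("SELECT data FROM blobs ORDER BY id DESC LIMIT 1")
buf = cur.fetchone()[0]     # memoryview on Python 3, buffer on Python 2
restored = bytes(buf)
```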

If you use a previous version you will need some extra care when receiving bytea from PostgreSQL: the client-side libpq must also be recent enough to understand the hex format, otherwise the data will not be decoded correctly. Time zones are supported too. The PostgreSQL type timestamp with time zone (a.k.a. timestamptz) is converted into a Python datetime object with its tzinfo attribute set. A few historical time zones had seconds in the UTC offset: on Python versions that cannot represent sub-minute offsets, these time zones will have the offset rounded to the nearest minute, with an error of up to 30 seconds.
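
A small round-trip sketch (assuming an open cursor cur):

```python
from datetime import datetime, timezone

cur.execute("SELECT %s::timestamptz",
            (datetime(2020, 1, 1, 12, 0, tzinfo=timezone.utc),))
value = cur.fetchone()[0]
print(value, value.tzinfo)   # an offset-aware datetime; the offset reflects the
                             # session's TimeZone setting, not necessarily UTC
```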

Previously such time zones raised an error. Infinite dates are not available to Python, so these objects are mapped to date.max and date.min respectively.

Unfortunately the mapping cannot be bidirectional, so these dates will be stored back into the database with their literal values rather than as infinity. It is possible to create an alternative adapter for dates and other objects to map date.max back to infinity, for instance with a custom adapter like the sketch below.
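
This version follows the example in the psycopg2 documentation; registering it changes how every datetime.date value is adapted:

```python
import datetime
import psycopg2
import psycopg2.extensions

class InfDateAdapter:
    # Adapt datetime.date so that date.max/date.min are written as infinity.
    def __init__(self, wrapped):
        self.wrapped = wrapped

    def getquoted(self):
        if self.wrapped == datetime.date.max:
            return b"'infinity'::date"
        elif self.wrapped == datetime.date.min:
            return b"'-infinity'::date"
        else:
            return psycopg2.extensions.DateFromPy(self.wrapped).getquoted()

psycopg2.extensions.register_adapter(datetime.date, InfDateAdapter)
```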

Of course it will then not be possible to write the actual value of date.max in the database anymore: it would be stored as infinity instead. Python lists are adapted to PostgreSQL arrays. Reading back from PostgreSQL, arrays are converted to lists of Python objects as expected, but only if the items are of a known type; arrays of unknown types are returned as represented by the database (e.g. as a string). If you want to convert the items into Python objects you can easily create a typecaster for arrays of unknown types. Python tuples are converted into a syntax suitable for the SQL IN operator and to represent a composite type:
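
A sketch of the two adaptations (the items table is made up; cur is an open cursor):

```python
# A tuple is adapted to the SQL IN syntax; note that the tuple is itself
# wrapped in the argument tuple passed to execute().
cur.execute("SELECT * FROM items WHERE id IN %s", ((10, 20, 30),))

# A list is adapted to a PostgreSQL array, convenient with = ANY(...).
cur.execute("SELECT * FROM items WHERE id = ANY(%s)", ([10, 20, 30],))
```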

Alternatively you can use a Python list, which is adapted to a PostgreSQL array. This adaptation is now registered automatically; in previous releases it was necessary to import the extensions module to have it registered. In Psycopg, transactions are handled by the connection class. By default, the first time a command is sent to the database using one of the cursors created by the connection, a new transaction is started, and it remains open until commit() or rollback() is called.
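
A minimal sketch of this default transaction behaviour (the items and counters tables are made up):

```python
import psycopg2

conn = psycopg2.connect("dbname=test")
cur = conn.cursor()
try:
    # Both statements run inside the same implicitly opened transaction.
    cur.execute("INSERT INTO items (name) VALUES (%s)", ("widget",))
    cur.execute("UPDATE counters SET n = n + 1 WHERE id = %s", (1,))
    conn.commit()       # make both changes permanent together
except Exception:
    conn.rollback()     # discard everything since the transaction started
    raise
finally:
    cur.close()
    conn.close()
```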

I have also been running more test variants. I thought the leak would be more closely associated with the number of rows than with the total amount of bytes. Could it be that the leak happens for every "chunk" of bytes read from the connection and not for every row?

I would have been surprised about the opposite. This seems a problem with your machine's network stack, not with Postgres and its client.

I would have been surprised if it was a psycopg or Python problem; thought it was worth noting, though. I will try to investigate the number of rows being submitted in each transaction block. Previously I had said that increasing the maxfiles limit seemed to have helped. However, after a restart I neglected to increase the limit again, so the system default was reinstated, and things were still working fine even with a large number of rows pending commit.

Then, for no apparent reason, the fault reappeared. Thank you obi. The second error is caused by the transaction context failing to detect that the connection is not in a state to receive a rollback: see the referenced issue. If you try your code without that block you should see only the first traceback.

Reverted to the old code that worked on psycopg2 (i.e. without connection or transaction contexts). It may seem obvious, but there are far, far fewer open files now: on my system it holds at a steady 36 files, regardless of the number of uncommitted transactions. Thank you obi, that's a clean repro. (Lines 55 to 60 in ed4b.) Could you please add a couple of prints to check whether, for some reason, we are registering but not unregistering? The only reason I can see would be a StopIteration thrown in the middle, but that should happen only once. If you verify that the calls are well paired I will ask some Python devs to take a look.

In my tests the calls seemed perfectly balanced. I also tested replacing DefaultSelector with SelectSelector here (lines 52 to 60 in ed4b). Hi qwesda. Uhm, maybe we have been reading the wrong thing all this time? Looking again at your original stack trace, and looking better at these objects, they support a close() method to release their resources, and can also be used as a context manager. Could you please check if the changeset ba fixes the problem?
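
A rough illustration of why closing matters (a generic sketch, not psycopg's actual wait loop; the socket and event mask are made up):

```python
import selectors
import socket

sock = socket.socket()

# A DefaultSelector may own a kernel object (kqueue on macOS, epoll on Linux).
# Using it as a context manager guarantees close() runs and the descriptor is
# released immediately, instead of waiting for garbage collection.
with selectors.DefaultSelector() as sel:
    sel.register(sock, selectors.EVENT_WRITE)
    sel.select(timeout=0)
    sel.unregister(sock)
# sel.close() has been called here, so the kqueue/epoll descriptor is gone.

sock.close()
```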

Could you please check if the changeset ba fixes the problem? I tested it up to 15 GB and the kqueue count oscillates between 1 and 0.
