
Source Code for Module Gnumed.pycommon.gmBusinessDBObject

  1   
  2   
  3   
  4  __doc__ = """GNUmed database object business class. 
  5   
  6  Overview 
  7  -------- 
  8  This class wraps a source relation (table, view) which 
  9  represents an entity that makes immediate business sense 
 10  such as a vaccination or a medical document. In many if 
 11  not most cases this source relation is a denormalizing 
 12  view. The data in that view will in most cases, however, 
 13  originate from several normalized tables. One instance 
 14  of this class represents one row of said source relation. 
 15   
 16  Note, however, that this class does not *always* simply 
 17  wrap a single table or view. It can also encompass several 
 18  relations (views, tables, sequences etc) that taken together 
 19  form an object meaningful to *business* logic. 
 20   
 21  Initialization 
 22  -------------- 
 23  There are two ways to initialize an instance with values. 
 24  One way is to pass a "primary key equivalent" object into 
 25  __init__(); refetch_payload() will then pull the data from 
 26  the backend. Another way would be to fetch the data outside 
 27  the instance and pass it in via the <row> argument. In that 
 28  case the instance will not initially connect to the database 
 29  which may offer a great boost to performance. 
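
As an illustration (class and key names here are made up;
the template further below shows the real pattern):

        # PK-equivalent construction - refetch_payload() pulls from the backend:
        vacc = cVaccination(aPK_obj = 12)

        # row construction - no database roundtrip during __init__():
        rows, idx = gmPG2.run_ro_queries(queries = [{'cmd': cmd}], get_col_idx = True)
        vacc = cVaccination(row = {'data': rows[0], 'idx': idx, 'pk_field': 'pk_vaccination'})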
 30   
 31  Values API 
 32  ---------- 
 33  Field values are cached for later access. They can be accessed 
 34  by a dictionary API, eg: 
 35   
 36          old_value = object['field'] 
 37          object['field'] = new_value 
 38   
 39  The field names correspond to the respective column names 
 40  in the "main" source relation. Accessing non-existent field 
 41  names will raise an error, as does trying to set fields not 
 42  listed in self.__class__._updatable_fields. To actually 
 43  store updated values in the database one must explicitly 
 44  call save_payload(). 
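
A short usage sketch (class and field names invented, error
handling elided):

        epi = cEpisode(aPK_obj = 4)
        old_name = epi['description']
        epi['description'] = u'relapsing appendicitis'
        success, data = epi.save_payload()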
 45   
 46  The class will in many cases be enhanced by accessors to 
 47  related data that is not directly part of the business 
 48  object itself but is closely related, such as codes 
 49  linked to a clinical narrative entry (eg a diagnosis). Such 
 50  accessors in most cases start with get_*. Related setters 
 51  start with set_*. The values can be accessed via the 
 52  object['field'] syntax, too, but they will be cached 
 53  independently. 
 54   
 55  Concurrency handling 
 56  -------------------- 
 57  GNUmed connections always run transactions in isolation level 
 58  "serializable". This prevents transactions running at the 
 59  *very same time* from overwriting each other's data. All but one 
 60  of them will abort with a concurrency error (eg if a 
 61  transaction runs a select-for-update later than another one 
 62  it will hang until the first transaction ends. Then it will 
 63  succeed or fail depending on what the first transaction 
 64  did). This is standard transactional behaviour. 
 65   
 66  However, another transaction may have updated our row 
 67  between the time we first fetched the data and the time we 
 68  start the update transaction. This is noticed by getting the 
 69  XMIN system column for the row when initially fetching the 
 70  data and using that value as a where condition value when 
 71  updating the row later. If the row had been updated (xmin 
 72  changed) or deleted (primary key disappeared) in the 
 73  meantime the update will touch zero rows (as no row with 
 74  both PK and XMIN matching is found) even if the query itself 
 75  syntactically succeeds. 
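
Schematically, the generated UPDATE follows this pattern (table
and column names are placeholders):

        UPDATE some_table SET
                some_col = %(some_col)s
        WHERE
                pk = %(pk_val)s
                        AND
                xmin = %(xmin_at_fetch_time)s
        RETURNING
                xmin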
 76   
 77  When detecting a change in a row due to XMIN being different 
 78  one needs to be careful how to represent that to the user. 
 79  The row may simply have changed but it also might have been 
 80  deleted and a completely new and unrelated row which happens 
 81  to have the same primary key might have been created ! This 
 82  row might relate to a totally different context (eg. patient, 
 83  episode, encounter). 
 84   
 85  One can offer all the data to the user: 
 86   
 87  self.payload_most_recently_fetched 
 88  - contains the data at the last successful refetch 
 89   
 90  self.payload_most_recently_attempted_to_store 
 91  - contains the modified payload just before the last 
 92    failure of save_payload() - IOW what the client last 
 93    tried to store but which may NOT be in the database 
 94   
 95  self._payload 
 96  - contains the currently active payload which may or 
 97    may not contain changes 
 98   
 99  For discussion on this see the thread starting at: 
100   
101          http://archives.postgresql.org/pgsql-general/2004-10/msg01352.php 
102   
103  and here 
104   
105          http://groups.google.com/group/pgsql.general/browse_thread/thread/e3566ba76173d0bf/6cf3c243a86d9233 
106          (google for "XMIN semantic at peril") 
107   
108  Problem cases with XMIN: 
109   
110  1) not unlikely 
111  - a very old row is read with XMIN 
112  - vacuum comes along and sets XMIN to FrozenTransactionId 
113    - now XMIN changed but the row actually didn't ! 
114  - an update with "... where xmin = old_xmin ..." fails 
115    although there is no need to fail 
116   
117  2) quite unlikely 
118  - a row is read with XMIN 
119  - a long time passes 
120  - the original XMIN gets frozen to FrozenTransactionId 
121  - another writer comes along and changes the row 
122  - incidentally the exact same old row gets the old XMIN *again* 
123    - now XMIN is (again) the same but the data changed ! 
124  - a later update fails to detect the concurrent change !! 
125   
126  TODO: 
127  The solution is to use our own column for optimistic locking 
128  which gets updated by an AFTER UPDATE trigger. 
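
A minimal sketch of such a scheme (names invented, not what is
currently implemented; note that the counter is most easily bumped
in a BEFORE UPDATE trigger so the new value lands in the very same
row version):

        ALTER TABLE xxx.xxx ADD COLUMN row_version integer NOT NULL DEFAULT 0;

        CREATE FUNCTION xxx.trf_bump_row_version()
                RETURNS trigger
                LANGUAGE plpgsql
                AS 'BEGIN NEW.row_version := OLD.row_version + 1; RETURN NEW; END;';

        CREATE TRIGGER tr_bump_row_version
                BEFORE UPDATE ON xxx.xxx
                FOR EACH ROW EXECUTE PROCEDURE xxx.trf_bump_row_version();

An UPDATE would then use "... WHERE pk = ... AND row_version = ..."
instead of comparing XMIN.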
129  """ 
130  #============================================================ 
131  __author__ = "K.Hilbert <Karsten.Hilbert@gmx.net>" 
132  __license__ = "GPL v2 or later" 
133   
134   
135  import sys 
136  import types 
137  import inspect 
138  import logging 
139  import datetime 
140   
141   
142  if __name__ == '__main__': 
143          sys.path.insert(0, '../../') 
144  from Gnumed.pycommon import gmExceptions 
145  from Gnumed.pycommon import gmPG2 
146  from Gnumed.pycommon.gmDateTime import pydt_strftime 
147  from Gnumed.pycommon.gmTools import tex_escape_string 
148  from Gnumed.pycommon.gmTools import xetex_escape_string 
149  from Gnumed.pycommon.gmTools import compare_dict_likes 
150  from Gnumed.pycommon.gmTools import format_dict_like 
151  from Gnumed.pycommon.gmTools import format_dict_likes_comparison 
152   
153   
154  _log = logging.getLogger('gm.db') 
155  #============================================================ 
156  class cBusinessDBObject(object):
157          """Represents business objects in the database.
158  
159          Rules:
160          - instances ARE ASSUMED TO EXIST in the database
161          - PK construction (aPK_obj): DOES verify its existence on instantiation
162            (fetching data fails)
163          - Row construction (row): allowed by using a dict of pairs
164            field name: field value (PERFORMANCE improvement)
165          - does NOT verify FK target existence
166          - does NOT create new entries in the database
167          - does NOT lazy-fetch fields on access
168  
169          Class scope SQL commands and variables:
170  
171          <_cmd_fetch_payload>
172          - must return exactly one row
173          - WHERE clause argument values are expected
174            in self.pk_obj (taken from __init__(aPK_obj))
175          - must return the xmin of all rows that _cmds_store_payload
176            will be updating, so views must support the xmin columns
177            of their underlying tables
178  
179          <_cmds_store_payload>
180          - one or more "UPDATE ... SET ... WHERE xmin_* = ... AND pk* = ..."
181            statements which actually update the database from the data in self._payload,
182          - the last query must refetch at least the XMIN values needed to detect
183            concurrent updates; their field names had better be the same as
184            in _cmd_fetch_payload,
185          - the last query CAN return other fields, which is particularly
186            useful when those other fields are computed in the backend
187            and may thus change upon save but will not have been set by
188            the client code explicitly - this is only really of concern
189            if the saved instance is to be reused after saving rather
190            than re-instantiated
191          - when subclasses tend to live a while after save_payload() was
192            called and they support computed fields (say, _(some_column))
193            you need to return *all* columns (see cEncounter)
194  
195          <_updatable_fields>
196          - a list of fields available for update via object['field']
197  
198  
199          A template for new child classes:
200  
201          *********** start of template ***********
202  
203  #------------------------------------------------------------
204  from Gnumed.pycommon import gmBusinessDBObject
205  from Gnumed.pycommon import gmPG2
206  
207  #============================================================
208  # short description
209  #------------------------------------------------------------
210  # search/replace "" " -> 3 "s
211  #
212  # search-replace get_XXX, use plural form
213  _SQL_get_XXX = u"" "
214          SELECT *, xmin AS xmin_XXX
215          FROM XXX.v_XXX
216          WHERE %s
217  "" "
218  
219  class cXxxXxx(gmBusinessDBObject.cBusinessDBObject):
220          "" "Represents ..."" "
221  
222          _cmd_fetch_payload = _SQL_get_XXX % u"pk_XXX = %s"
223          _cmds_store_payload = [
224                  u"" "
225                  -- typically the underlying table name
226                  UPDATE xxx.xxx SET
227                          -- typically "table_col = %(view_col)s"
228                          xxx = %(xxx)s,
229                          xxx = gm.nullify_empty_string(%(xxx)s)
230                  WHERE
231                          pk = %(pk_XXX)s
232                                  AND
233                          xmin = %(xmin_XXX)s
234                  RETURNING
235                          xmin AS xmin_XXX
236                          -- also return columns which are calculated in the view used by
237                          -- the initial SELECT such that they will further on contain their
238                          -- updated value:
239                          --, ...
240                          --, ...
241                  "" "
242          ]
243          # view columns that can be updated:
244          _updatable_fields = [
245                  u'xxx',
246                  u'xxx'
247          ]
248          #--------------------------------------------------------
249  #       def format(self):
250  #               return u'%s' % self
251  
252  #------------------------------------------------------------
253  def get_XXX(order_by=None):
254          if order_by is None:
255                  order_by = u'true'
256          else:
257                  order_by = u'true ORDER BY %s' % order_by
258  
259          cmd = _SQL_get_XXX % order_by
260          rows, idx = gmPG2.run_ro_queries(queries = [{'cmd': cmd}], get_col_idx = True)
261          return [ cXxxXxx(row = {'data': r, 'idx': idx, 'pk_field': 'pk_XXX'}) for r in rows ]
262  #------------------------------------------------------------
263  def create_xxx(xxx=None, xxx=None):
264  
265          args = {
266                  u'xxx': xxx,
267                  u'xxx': xxx
268          }
269          cmd = u"" "
270                  INSERT INTO xxx.xxx (
271                          xxx,
272                          xxx,
273                          xxx
274                  ) VALUES (
275                          %(xxx)s,
276                          %(xxx)s,
277                          gm.nullify_empty_string(%(xxx)s)
278                  )
279                  RETURNING pk
280                  --RETURNING *
281          "" "
282          rows, idx = gmPG2.run_rw_queries(queries = [{'cmd': cmd, 'args': args}], return_data = True, get_col_idx = False)
283          #rows, idx = gmPG2.run_rw_queries(queries = [{'cmd': cmd, 'args': args}], return_data = True, get_col_idx = True)
284  
285          return cXxxXxx(aPK_obj = rows[0]['pk'])
286          #return cXxxXxx(row = {'data': r, 'idx': idx, 'pk_field': 'pk_XXX'})
287  #------------------------------------------------------------
288  def delete_xxx(pk_XXX=None):
289          args = {'pk': pk_XXX}
290          cmd = u"DELETE FROM xxx.xxx WHERE pk = %(pk)s"
291          gmPG2.run_rw_queries(queries = [{'cmd': cmd, 'args': args}])
292          return True
293  #------------------------------------------------------------
294  
295          *********** end of template ***********
296  
297          """
298          #--------------------------------------------------------
299          def __init__(self, aPK_obj=None, row=None, link_obj=None):
300                  """Init business object.
301  
302                  Call from child classes:
303  
304                          super(cChildClass, self).__init__(aPK_obj = aPK_obj, row = row, link_obj = link_obj)
305                  """
306                  # initialize those "too early" because checking descendants might
307                  # fail which will then call __str__ in stack trace logging if --debug
308                  # was given which in turn needs those instance variables
309                  self.pk_obj = '<uninitialized>'
310                  self._idx = {}
311                  self._payload = []              # the cache for backend object values (mainly table fields)
312                  self._ext_cache = {}            # the cache for extended method's results
313                  self._is_modified = False
314  
315                  # check descendants
316                  self.__class__._cmd_fetch_payload
317                  self.__class__._cmds_store_payload
318                  self.__class__._updatable_fields
319  
320                  if aPK_obj is not None:
321                          self.__init_from_pk(aPK_obj = aPK_obj, link_obj = link_obj)
322                  else:
323                          self._init_from_row_data(row = row)
324  
325                  self._is_modified = False
326  
327          #--------------------------------------------------------
328          def __init_from_pk(self, aPK_obj=None, link_obj=None):
329                  """Creates a new clinical item instance by its PK.
330  
331                  aPK_obj can be:
332                  - a simple value
333                    * the primary key WHERE condition must be
334                      a simple column
335                  - a dictionary of values
336                    * the primary key WHERE condition must be a
337                      subselect consuming the dict and producing
338                      the single-value primary key
339                  """
340                  self.pk_obj = aPK_obj
341                  result = self.refetch_payload(link_obj = link_obj)
342                  if result is True:
343                          self.payload_most_recently_fetched = {}
344                          for field in self._idx.keys():
345                                  self.payload_most_recently_fetched[field] = self._payload[self._idx[field]]
346                          return True
347  
348                  if result is False:
349                          raise gmExceptions.ConstructorError("[%s:%s]: error loading instance" % (self.__class__.__name__, self.pk_obj))
350  
351          #--------------------------------------------------------
352          def _init_from_row_data(self, row=None):
353                  """Creates a new clinical item instance given its fields.
354  
355                  row must be a dict with the fields:
356                  - pk_field: the name of the primary key field
357                  - idx: a dict mapping field names to position
358                  - data: the field values in a list (as returned by
359                    cursor.fetchone() in the DB-API)
360  
361                  row = {'data': rows[0], 'idx': idx, 'pk_field': 'pk_XXX (the PK column name)'}
362  
363                  rows, idx = gmPG2.run_ro_queries(queries = [{'cmd': cmd, 'args': args}], get_col_idx = True)
364                  objects = [ cChildClass(row = {'data': r, 'idx': idx, 'pk_field': 'the PK column name'}) for r in rows ]
365                  """
366                  try:
367                          self._idx = row['idx']
368                          self._payload = row['data']
369                          self.pk_obj = self._payload[self._idx[row['pk_field']]]
370                  except Exception:
371                          _log.exception('faulty <row> argument structure: %s' % row)
372                          raise gmExceptions.ConstructorError("[%s:??]: error loading instance from row data" % self.__class__.__name__)
373  
374                  if len(self._idx.keys()) != len(self._payload):
375                          _log.critical('field index vs. payload length mismatch: %s field names vs. %s fields' % (len(self._idx.keys()), len(self._payload)))
376                          _log.critical('faulty <row> argument structure: %s' % row)
377                          raise gmExceptions.ConstructorError("[%s:??]: error loading instance from row data" % self.__class__.__name__)
378  
379                  self.payload_most_recently_fetched = {}
380                  for field in self._idx.keys():
381                          self.payload_most_recently_fetched[field] = self._payload[self._idx[field]]
382  
383          #--------------------------------------------------------
384          def __del__(self):
385                  if '_is_modified' in self.__dict__:
386                          if self._is_modified:
387                                  _log.critical('[%s:%s]: losing payload changes' % (self.__class__.__name__, self.pk_obj))
388                                  _log.debug('most recently fetched: %s' % self.payload_most_recently_fetched)
389                                  _log.debug('modified: %s' % self._payload)
390  
391          #--------------------------------------------------------
392          def __str__(self):
393                  lines = []
394                  try:
395                          for attr in self._idx.keys():
396                                  if self._payload[self._idx[attr]] is None:
397                                          lines.append('%s: NULL' % attr)
398                                  else:
399                                          lines.append('%s: %s [%s]' % (
400                                                  attr,
401                                                  self._payload[self._idx[attr]],
402                                                  type(self._payload[self._idx[attr]])
403                                          ))
404                          return '[%s:%s]:\n%s' % (self.__class__.__name__, self.pk_obj, '\n'.join(lines))
405                  except Exception:
406                          return 'likely nascent [%s @ %s], cannot show payload and primary key' % (self.__class__.__name__, id(self))
407  
408          #--------------------------------------------------------
409          def __getitem__(self, attribute):
410                  # use try: except: as it is faster and we want this as fast as possible
411  
412                  # 1) backend payload cache
413                  try:
414                          return self._payload[self._idx[attribute]]
415                  except KeyError:
416                          pass
417  
418                  # 2) extension method results ...
419                  getter = getattr(self, 'get_%s' % attribute, None)
420                  if not callable(getter):
421                          _log.warning('[%s]: no attribute [%s]' % (self.__class__.__name__, attribute))
422                          _log.warning('[%s]: valid attributes: %s' % (self.__class__.__name__, str(self._idx.keys())))
423                          _log.warning('[%s]: no getter method [get_%s]' % (self.__class__.__name__, attribute))
424                          methods = [ m for m in inspect.getmembers(self, inspect.ismethod) if m[0].startswith('get_') ]
425                          _log.warning('[%s]: valid getter methods: %s' % (self.__class__.__name__, str(methods)))
426                          raise KeyError('[%s]: cannot read from key [%s]' % (self.__class__.__name__, attribute))
427  
428                  self._ext_cache[attribute] = getter()
429                  return self._ext_cache[attribute]
430          #--------------------------------------------------------
431          def __setitem__(self, attribute, value):
432  
433                  # 1) backend payload cache
434                  if attribute in self.__class__._updatable_fields:
435                          try:
436                                  if self._payload[self._idx[attribute]] != value:
437                                          self._payload[self._idx[attribute]] = value
438                                          self._is_modified = True
439                                  return
440                          except KeyError:
441                                  _log.warning('[%s]: cannot set attribute <%s> despite marked settable' % (self.__class__.__name__, attribute))
442                                  _log.warning('[%s]: supposedly settable attributes: %s' % (self.__class__.__name__, str(self.__class__._updatable_fields)))
443                                  raise KeyError('[%s]: cannot write to key [%s]' % (self.__class__.__name__, attribute))
444  
445                  # 2) setters providing extensions
446                  if hasattr(self, 'set_%s' % attribute):
447                          setter = getattr(self, "set_%s" % attribute)
448                          if not callable(setter):
449                                  raise AttributeError('[%s] setter [set_%s] not callable' % (self.__class__.__name__, attribute))
450                          try:
451                                  del self._ext_cache[attribute]
452                          except KeyError:
453                                  pass
454                          if type(value) == tuple:
455                                  if setter(*value):
456                                          self._is_modified = True
457                                          return
458                                  raise AttributeError('[%s]: setter [%s] failed for [%s]' % (self.__class__.__name__, setter, value))
459                          if setter(value):
460                                  self._is_modified = True
461                                  return
462  
463                  # 3) don't know what to do with <attribute>
464                  _log.error('[%s]: cannot find attribute <%s> or setter method [set_%s]' % (self.__class__.__name__, attribute, attribute))
465                  _log.warning('[%s]: settable attributes: %s' % (self.__class__.__name__, str(self.__class__._updatable_fields)))
466                  methods = [ m for m in inspect.getmembers(self, inspect.ismethod) if m[0].startswith('set_') ]
467                  _log.warning('[%s]: valid setter methods: %s' % (self.__class__.__name__, str(methods)))
468                  raise AttributeError('[%s]: cannot set [%s]' % (self.__class__.__name__, attribute))
469  
470          #--------------------------------------------------------
471          # external API
472          #--------------------------------------------------------
473          def same_payload(self, another_object=None):
474                  raise NotImplementedError('comparison between [%s] and [%s] not implemented' % (self, another_object))
475          #--------------------------------------------------------
476          def is_modified(self):
477                  return self._is_modified
478  
479          #--------------------------------------------------------
480          def get_fields(self):
481                  try:
482                          return self._idx.keys()
483                  except AttributeError:
484                          return 'nascent [%s @ %s], cannot return keys' % (self.__class__.__name__, id(self))
485  
486          #--------------------------------------------------------
487          def get_updatable_fields(self):
488                  return self.__class__._updatable_fields
489  
490          #--------------------------------------------------------
491          def fields_as_dict(self, date_format='%Y %b %d %H:%M', none_string='', escape_style=None, bool_strings=None):
492                  if bool_strings is None:
493                          bools = {True: 'True', False: 'False'}
494                  else:
495                          bools = {True: bool_strings[0], False: bool_strings[1]}
496                  data = {}
497                  for field in self._idx.keys():
498                          # FIXME: harden against BYTEA fields
499                          #if type(self._payload[self._idx[field]]) == ...
500                          #       data[field] = _('<%s bytes of binary data>') % len(self._payload[self._idx[field]])
501                          #       continue
502                          val = self._payload[self._idx[field]]
503                          if val is None:
504                                  data[field] = none_string
505                                  continue
506                          if isinstance(val, bool):
507                                  data[field] = bools[val]
508                                  continue
509  
510                          if isinstance(val, datetime.datetime):
511                                  if date_format is None:
512                                          data[field] = val
513                                          continue
514                                  data[field] = pydt_strftime(val, format = date_format)
515                                  if escape_style in ['latex', 'tex']:
516                                          data[field] = tex_escape_string(data[field])
517                                  elif escape_style in ['xetex', 'xelatex']:
518                                          data[field] = xetex_escape_string(data[field])
519                                  continue
520  
521                          try:
522                                  data[field] = str(val, encoding = 'utf8', errors = 'replace')
523                          except TypeError:
524                                  try:
525                                          data[field] = str(val)
526                                  except (UnicodeDecodeError, TypeError):
527                                          val = '%s' % str(val)
528                                          data[field] = val.decode('utf8', 'replace')
529                          if escape_style in ['latex', 'tex']:
530                                  data[field] = tex_escape_string(data[field])
531                          elif escape_style in ['xetex', 'xelatex']:
532                                  data[field] = xetex_escape_string(data[field])
533  
534                  return data
535          #--------------------------------------------------------
536          def get_patient(self):
537                  _log.error('[%s:%s]: forgot to override get_patient()' % (self.__class__.__name__, self.pk_obj))
538                  return None
539  
540          #--------------------------------------------------------
541          def format(self, *args, **kwargs):
542                  return format_dict_like (
543                          self.fields_as_dict(none_string = '<?>'),
544                          tabular = True,
545                          value_delimiters = None
546                  ).split('\n')
547  
548          #--------------------------------------------------------
549          def _get_revision_history(self, query, args, title):
550                  rows, idx = gmPG2.run_ro_queries(queries = [{'cmd': query, 'args': args}], get_col_idx = True)
551                  lines = []
552                  lines.append('%s (%s versions)' % (title, rows[0]['row_version'] + 1))
553                  if len(rows) == 1:
554                          lines.append('')
555                          lines.extend(format_dict_like (
556                                  rows[0],
557                                  left_margin = 1,
558                                  tabular = True,
559                                  value_delimiters = None,
560                                  eol = None
561                          ))
562                          return lines
563  
564                  for row_idx in range(len(rows) - 1):
565                          lines.append('')
566                          row_older = rows[row_idx + 1]
567                          row_newer = rows[row_idx]
568                          lines.extend(format_dict_likes_comparison (
569                                  row_older,
570                                  row_newer,
571                                  title_left = _('Revision #%s') % row_older['row_version'],
572                                  title_right = _('Revision #%s') % row_newer['row_version'],
573                                  left_margin = 0,
574                                  key_delim = ' | ',
575                                  data_delim = ' | ',
576                                  missing_string = '',
577                                  ignore_diff_in_keys = ['audit__action_applied', 'audit__action_when', 'audit__action_by', 'pk_audit', 'row_version', 'modified_when', 'modified_by']
578                          ))
579                  return lines
580  
581          #--------------------------------------------------------
582          def refetch_payload(self, ignore_changes=False, link_obj=None):
583                  """Fetch field values from backend.
584                  """
585                  if self._is_modified:
586                          compare_dict_likes(self.original_payload, self.fields_as_dict(date_format = None, none_string = None), 'original payload', 'modified payload')
587                          if ignore_changes:
588                                  _log.critical('[%s:%s]: losing payload changes' % (self.__class__.__name__, self.pk_obj))
589                                  #_log.debug('most recently fetched: %s' % self.payload_most_recently_fetched)
590                                  #_log.debug('modified: %s' % self._payload)
591                          else:
592                                  _log.critical('[%s:%s]: cannot reload, payload changed' % (self.__class__.__name__, self.pk_obj))
593                                  return False
594  
595                  if type(self.pk_obj) == dict:
596                          arg = self.pk_obj
597                  else:
598                          arg = [self.pk_obj]
599                  rows, self._idx = gmPG2.run_ro_queries (
600                          link_obj = link_obj,
601                          queries = [{'cmd': self.__class__._cmd_fetch_payload, 'args': arg}],
602                          get_col_idx = True
603                  )
604                  if len(rows) == 0:
605                          _log.error('[%s:%s]: no such instance' % (self.__class__.__name__, self.pk_obj))
606                          return False
607                  self._payload = rows[0]
608                  return True
609  
610          #--------------------------------------------------------
611          def __noop(self):
612                  pass
613  
614          #--------------------------------------------------------
615          def save(self, conn=None):
616                  return self.save_payload(conn = conn)
617  
618          #--------------------------------------------------------
619          def save_payload(self, conn=None):
620                  """Store updated values (if any) in database.
621  
622                  Optionally accepts a pre-existing connection.
623                  - returns a tuple (<True|False>, <data>)
624                  - True: success
625                  - False: an error occurred
626                    * data is (error, message)
627                    * for error meanings see gmPG2.run_rw_queries()
628                  """
629                  if not self._is_modified:
630                          return (True, None)
631  
632                  args = {}
633                  for field in self._idx.keys():
634                          args[field] = self._payload[self._idx[field]]
635                  self.payload_most_recently_attempted_to_store = args
636  
637                  close_conn = self.__noop
638                  if conn is None:
639                          conn = gmPG2.get_connection(readonly = False)
640                          close_conn = conn.close
641  
642                  queries = []
643                  for query in self.__class__._cmds_store_payload:
644                          queries.append({'cmd': query, 'args': args})
645                  rows, idx = gmPG2.run_rw_queries (
646                          link_obj = conn,
647                          queries = queries,
648                          return_data = True,
649                          get_col_idx = True
650                  )
651  
652                  # success ?
653                  if len(rows) == 0:
654                          # nothing updated - this can happen if:
655                          # - someone else updated the row so XMIN does not match anymore
656                          # - the PK went away (rows were deleted from under us)
657                          # - another WHERE condition of the UPDATE did not produce any rows to update
658                          # - savepoints are used since subtransactions may relevantly change the xmin/xmax ...
659                          return (False, ('cannot update row', _('[%s:%s]: row not updated (nothing returned), row in use ?') % (self.__class__.__name__, self.pk_obj)))
660  
661                  # update cached values from should-be-first-and-only
662                  # result row of last query,
663                  # update all fields returned such that computed
664                  # columns see their new values (given they are
665                  # returned by the query)
666                  row = rows[0]
667                  for key in idx:
668                          try:
669                                  self._payload[self._idx[key]] = row[idx[key]]
670                          except KeyError:
671                                  conn.rollback()
672                                  close_conn()
673                                  _log.error('[%s:%s]: cannot update instance, XMIN refetch key mismatch on [%s]' % (self.__class__.__name__, self.pk_obj, key))
674                                  _log.error('payload keys: %s' % str(self._idx))
675                                  _log.error('XMIN refetch keys: %s' % str(idx))
676                                  _log.error(args)
677                                  raise
678  
679                  # only at conn.commit() time will data actually
680                  # get committed (and thusly trigger based notifications
681                  # be sent out), so reset the local modification flag
682                  # right before that
683                  self._is_modified = False
684                  conn.commit()
685                  close_conn()
686  
687                  # update to new "original" payload
688                  self.payload_most_recently_fetched = {}
689                  for field in self._idx.keys():
690                          self.payload_most_recently_fetched[field] = self._payload[self._idx[field]]
691  
692                  return (True, None)
693  
694  #============================================================
695  def jsonclasshintify(obj):
696          # this should eventually be somewhere else
697          """Turn the data into a list of dicts, adding "class hints".
698          All objects get turned into dictionaries which the other end
699          will interpret as "object", via the __jsonclass__ hint,
700          as specified by the JSON-RPC protocol standard.
701          """
702          if isinstance(obj, list):
703                  return list(map(jsonclasshintify, obj))
704          elif isinstance(obj, gmPG2.dbapi.tz.FixedOffsetTimezone):
705                  # this will get decoded as "from jsonobjproxy import {clsname}"
706                  # at the remote (client) end
707                  res = {'__jsonclass__': ["jsonobjproxy.FixedOffsetTimezone"]}
708                  res['name'] = obj._name
709                  res['offset'] = jsonclasshintify(obj._offset)
710                  return res
711          elif isinstance(obj, datetime.timedelta):
712                  # this will get decoded as "from jsonobjproxy import {clsname}"
713                  # at the remote (client) end
714                  res = {'__jsonclass__': ["jsonobjproxy.TimeDelta"]}
715                  res['days'] = obj.days
716                  res['seconds'] = obj.seconds
717                  res['microseconds'] = obj.microseconds
718                  return res
719          elif isinstance(obj, datetime.time):
720                  # this will get decoded as "from jsonobjproxy import {clsname}"
721                  # at the remote (client) end
722                  res = {'__jsonclass__': ["jsonobjproxy.Time"]}
723                  res['hour'] = obj.hour
724                  res['minute'] = obj.minute
725                  res['second'] = obj.second
726                  res['microsecond'] = obj.microsecond
727                  res['tzinfo'] = jsonclasshintify(obj.tzinfo)
728                  return res
729          elif isinstance(obj, datetime.datetime):
730                  # this will get decoded as "from jsonobjproxy import {clsname}"
731                  # at the remote (client) end
732                  res = {'__jsonclass__': ["jsonobjproxy.DateTime"]}
733                  res['year'] = obj.year
734                  res['month'] = obj.month
735                  res['day'] = obj.day
736                  res['hour'] = obj.hour
737                  res['minute'] = obj.minute
738                  res['second'] = obj.second
739                  res['microsecond'] = obj.microsecond
740                  res['tzinfo'] = jsonclasshintify(obj.tzinfo)
741                  return res
742          elif isinstance(obj, cBusinessDBObject):
743                  # this will get decoded as "from jsonobjproxy import {clsname}"
744                  # at the remote (client) end
745                  res = {'__jsonclass__': ["jsonobjproxy.%s" % obj.__class__.__name__]}
746                  for k in obj.get_fields():
747                          t = jsonclasshintify(obj[k])
748                          res[k] = t
749                  #print("props", res, dir(obj))          # leftover debug output
750                  for attribute in dir(obj):
751                          if not attribute.startswith("get_"):
752                                  continue
753                          k = attribute[4:]
754                          if k in res:
755                                  continue
756                          getter = getattr(obj, attribute, None)
757                          if callable(getter):
758                                  res[k] = jsonclasshintify(getter())
759                  return res
760          return obj
761  
762  #============================================================
763  if __name__ == '__main__':
764  
765          if len(sys.argv) < 2:
766                  sys.exit()
767  
768          if sys.argv[1] != 'test':
769                  sys.exit()
770  
771          #--------------------------------------------------------
772          class cTestObj(cBusinessDBObject):
773                  _cmd_fetch_payload = None
774                  _cmds_store_payload = None
775                  _updatable_fields = []
776                  #----------------------------------------------------
777                  def get_something(self):
778                          pass
779                  #----------------------------------------------------
780                  def set_something(self):
781                          pass
782          #--------------------------------------------------------
783          from Gnumed.pycommon import gmI18N
784          gmI18N.activate_locale()
785          gmI18N.install_domain()
786  
787          data = {
788                  'pk_field': 'bogus_pk',
789                  'idx': {'bogus_pk': 0, 'bogus_field': 1, 'bogus_date': 2},
790                  'data': [-1, 'bogus_data', datetime.datetime.now()]
791          }
792          obj = cTestObj(row = data)
793          #print(obj['wrong_field'])
794          #print(jsonclasshintify(obj))
795          #obj['wrong_field'] = 1
796          #print(obj.fields_as_dict())
797          print(obj.format())
798  
799  #============================================================
800  