id,summary,reporter,owner,description,type,status,priority,milestone,component,version,severity,resolution,keywords,cc
995,performance design flaw in bb_update_meta,_ck_,,"There is an inherited, legacy performance issue in bb_update_meta that causes excessive queries whenever meta is updated in the db. The logic is flawed: it runs a SELECT query for the row about to be updated in order to decide between UPDATE and INSERT, instead of using INSERT ... ON DUPLICATE KEY with a key in the table on object_id + meta_key. Originally I thought the meta table lacked such a key on purpose, so it could tolerate multiple rows with the same meta_key and the same object_id. But this makes no sense, since bb_update_meta forces every row sharing an object_id + meta_key to hold the same data; duplicates can never carry different values, so tolerating them is pointless. Worse, the missing key allows corruption of the meta through the accidental creation of multiple rows with the same object_id + meta_key. The data in such duplicate rows must always be identical, because bb_update_meta does the SELECT and then performs an UPDATE with no LIMIT, which affects all the matching rows. The table then simply keeps the extra duplicate rows, and the SELECT in bb_update_meta happens over and over again. This design flaw doubles the queries for every plugin that uses meta and wastes the mysql query cache. I'd like to see this addressed even in 0.9 as a patch: a routine to first check the meta tables for duplicates, remove the duplicates, then add the key index to the table, then replace the bb_update_meta routine to use INSERT ON DUPLICATE KEY instead of SELECT/INSERT/UPDATE.",defect (bug),closed,normal,1.1,Back-end,,normal,wontfix,,
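
A minimal sketch of the one-time cleanup the ticket proposes, assuming a bb_meta table with meta_id, object_id, meta_key, and meta_value columns (names follow bbPress 1.x and may differ in 0.9; the index name object_meta is illustrative):

    -- Remove duplicate rows, keeping the one with the lowest meta_id
    -- per (object_id, meta_key) pair.
    DELETE dup
      FROM bb_meta AS dup
      JOIN bb_meta AS orig
        ON dup.object_id = orig.object_id
       AND dup.meta_key  = orig.meta_key
       AND dup.meta_id   > orig.meta_id;

    -- Add the unique key so duplicates can never reappear.
    -- (If the table also has an object_type column, it would need
    -- to be part of this key as well.)
    ALTER TABLE bb_meta
      ADD UNIQUE KEY object_meta (object_id, meta_key);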
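Once that unique key exists, the SELECT/INSERT/UPDATE sequence inside bb_update_meta could collapse into the single query below. This is a sketch, not the actual patch; the literal values 42, 'view_count', and '7' are placeholders for the object id, meta key, and serialized meta value the function would receive:

    -- One round trip: insert the row, or update meta_value in place
    -- if a row with this (object_id, meta_key) already exists.
    INSERT INTO bb_meta (object_id, meta_key, meta_value)
    VALUES (42, 'view_count', '7')
    ON DUPLICATE KEY UPDATE meta_value = VALUES(meta_value);

This halves the query count for every meta write and also removes the repeated SELECT that the duplicate rows currently trigger.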