Oracle: statement-level write consistency question.

vc
Уже с Приветом
Posts: 664
Joined: 05 Jun 2002 01:11

Post by vc »

Tengiz,

Let me respond to your two postings in one go ...

tengiz wrote:Here's another one - what if there is no predicate at all? Do you believe that in this case Oracle should never re-scan? If so, then how would you like the following scenario. Session 1:

Code: Select all

drop table a;
create table a
(
   pk int primary key,
   fk int null,
   data int null,
   constraint c_fk foreign key (fk) references a (pk)
);
insert into a values (1, null, null);
commit;
select * from a;

insert into a values (2, 1, null);
update a set fk = 2 where pk = 1;
select * from a;
...
        PK         FK       DATA
---------- ---------- ----------
         1
...
        PK         FK       DATA
---------- ---------- ----------
         1          2
         2          1
Session 2:

Code: Select all

update a set data = fk;
select * from a;
commit;
Session 1:

Code: Select all

commit;
Session 2:

Code: Select all

        PK         FK       DATA
---------- ---------- ----------
         1          2          2 <-- looks like we did see the latest committed state.
         2          1            <-- nope, not true for this one

Do you believe that the result produced by session 2 is consistent? If so, then I
rest my case.


This case is no different from the cases you presented earlier. Without a predicate, the conceptual 'consistent select' will select the whole table as of the point in time Session 2's update was issued; hence only one row is updated. No surprise here, right? As to my belief, see below.

tengiz wrote:Now, if in the session 2 instead of <update a set data = fk> you'd issue <update a set pk = pk, data = fk>, the result would be different:

Code: Select all

        PK         FK       DATA
---------- ---------- ----------
         1          2          2
         2          1          1

And this is consistent indeed.


This is not surprising either -- the locked row's predicate column changed, causing Session 2's transaction to abort and restart.

In your previous posting you wrote:

tengiz wrote:vc, возможно я, как всегда, нечётко выражаюсь. Попробую слегка по-другому:

Я прекрасно понимаю, что делает Oracle. Ваши объяснения как это происходит не вызывают у меня почти никаких возражений. Проблема в том, что я считаю, что то что делает Oracle: во-первых, неправильно; во-вторых, не соответствует тому, что обещано в документации.

Почему я считаю, что результат, полученный в этих опытах неверный? Потому что транзакция атомарна по определению. Никакие других транзакции не должны видеть половину зафиксированного состояния (случай грязного чтения, очевидно, игнорируем). Или всё, или ничего. А Oracle на RC этого не обеспечивает в рассматриваемом примере не то что для транзакции, а даже для отдельного оператора, который тоже атомарен по определению - чего требует и стандарт, и формальная теория, и просто здравый смысл. Но оператор, тем не менее, отработал так, как если бы ему была представлена только часть зафиксированных данных.


An atomic transaction... A mighty big word. Let's apply some definition to it and see whether Oracle satisfies it:
"a transaction is a unit of operation. It's atomic, i.e. either all the transaction's actions are completed or none are". That's all.
Surely Oracle conforms to this definition -- we have not observed partial rollbacks or commits yet, right? I do not believe you have shown that Oracle sees "half of a committed state".

Let's recall, for the benefit of those who might be reading this stuff, how Oracle performs an update/delete. A consistent set of rows is selected based on the predicate, if any, and the timestamp (each Oracle block has one) corresponding to the point in time the update statement was issued.

Say we have two transactions, A and B. Transaction A issues an update statement with the T1 timestamp, and the isolation level is READ COMMITTED. When A arrives at some row, it discovers that the row is locked because B has already updated it. A blocks and waits until B commits. After B commits, there are two possible outcomes:

Case 1. The predicate value for the locked row did not change. Transaction A proceeds to update the set of rows it selected at T1. What people are usually upset about is the fact that A does not pick up the _new_ rows satisfying A's predicate. However, the READ COMMITTED isolation level never promised that: non-repeatable reads are a phenomenon explicitly allowed by this IL both in SQL'92 and SQL'99. How is this situation different from the case when a simple select statement runs and another transaction adds/deletes/modifies some rows whilst the select is being executed? One gets exactly the same situation as with updates, namely non-repeatable reads.

Case 2. The predicate value changed. Transaction A has no other choice but to abort because it 'lost' the locked row it was going to operate on -- the original selection as of T1 is no longer consistent precisely because of the row loss. So transaction A rolls back all the changes it might have made and starts from scratch using the new T2 timestamp as of the point in time when B committed. In Case 2, getting new rows, if any, is a side benefit, not a requirement.

In both cases Oracle operates over a consistent selection: in Case 1 as of T1 and in Case 2 as of T2. In neither case is a partially committed set of rows used.
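
To make the two cases concrete, here is a minimal two-session sketch (a hypothetical table t with made-up columns; the exact restart behaviour may vary between Oracle versions, so treat this as an illustration rather than a guarantee):

Code: Select all

-- Setup, Session 1 (hypothetical table):
create table t (id int primary key, grp int, val int);
insert into t values (1, 10, 0);
insert into t values (2, 10, 0);
commit;

-- Case 1: the predicate column does not change -> no restart.
-- Session 1:
update t set val = val + 1 where id = 1;  -- locks row 1, no commit yet
-- Session 2 (blocks on row 1):
update t set val = 100 where grp = 10;
-- Session 1:
commit;
-- Session 2 resumes and updates the rows it selected as of T1,
-- because grp, the predicate column, did not change.

-- Case 2: the predicate column changes -> restart.
-- Session 1:
update t set grp = 20 where id = 1;       -- changes the predicate column
-- Session 2 (blocks on row 1):
update t set val = 100 where grp = 10;
-- Session 1:
commit;
-- Session 2 rolls back its partial work and re-runs as of T2;
-- row 1 no longer satisfies grp = 10 and is skipped.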

Demanding that A behave in Case 1 the same way it did in Case 2 would be a call for REPEATABLE READ, which can be accomplished by:
a) running the update transaction in Oracle's Serializable mode;
or
b) using pessimistic locking via SELECT FOR UPDATE;
or
c) implementing optimistic locking via timestamping or checksumming (sketched below).
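
For (c), here is a minimal sketch of optimistic locking via a version column (hypothetical accounts table and bind variables; the same idea works with a timestamp or a checksum):

Code: Select all

-- Read the row and remember its version.
select val, row_version into :old_val, :old_version
  from accounts
 where id = :id;

-- Write back only if nobody has changed the row in the meantime.
update accounts
   set val = :new_val,
       row_version = row_version + 1
 where id = :id
   and row_version = :old_version;

-- If no row was updated (SQL%ROWCOUNT = 0 in PL/SQL), another
-- transaction got there first: re-read and retry instead of
-- silently overwriting its change.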

tengiz wrote:The reason is the algorithm Oracle uses to arbitrate conflicting updates - a performance optimization meant to avoid restarts that are (in Oracle's view) unnecessary - which, in my opinion, has a serious defect that leads to data corruption. The circle is now closed: this is exactly where I started this discussion.


I do not think you have presented convincing proof of the purported defect, and I am easy to convince - just recall our discussion of whether Oracle's Serializable is really serializable (it is not). At the very least, show where exactly the RC contract is violated.

tengiz wrote:By the way, try running the same experiment on SQL Server or Sybase.


If we want to decide whether a specific implementation is 'good enough', we must use a common yardstick, like ACID, isolation levels, etc. Specific implementation peculiarities are irrelevant in this context, although very important in their own right if we want to use a specific implementation to its fullest.
Besides, I do not like database pissing contests in general. After all, a database is just a tool more or less suitable for specific tasks.

However, to spice up our discussion a little, here's a little tit-for-tat:

Under our favourite isolation level READ COMMITTED, no blocking database (Sybase, DB2, etc.) guarantees correct results even for queries as simple as
'select sum(val) from t1 <predicate>'.
They may, and sometimes will, return bogus results that never existed in the database at any point in time. Now, that's really 'inconsistent', right? ;)
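
For the curious, a sketch of one interleaving that produces such a result (hypothetical table t1; assuming a locking scheduler that takes row-level share locks and releases each one as the scan moves past the row, with no range locks):

Code: Select all

-- Setup: the total of val is 200 at every point in time.
create table t1 (id int primary key, val int);
insert into t1 values (1, 100);
insert into t1 values (2, 100);
commit;

-- Session 1 starts the scan:
select sum(val) from t1;
-- ... reads row 1 (100), releases its share lock, moves on ...

-- Session 2 squeezes in, moves 50 from row 2 to row 1, and commits:
update t1 set val = val - 50 where id = 2;
update t1 set val = val + 50 where id = 1;
commit;

-- Session 1 reaches row 2 and reads 50.
-- Reported sum: 100 + 50 = 150, a total that never existed.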

Rgds.
Last edited by vc on 05 Nov 2003 18:56, edited 1 time in total.
vc
Уже с Приветом
Posts: 664
Joined: 05 Jun 2002 01:11

Post by vc »

camel wrote:My understanding, perhaps somewhat simplified, of how UPDATE works in Oracle.

An UPDATE statement executes in two phases: finding the rows and modifying them.

The first phase, finding the rows, runs immediately when the UPDATE is issued, say at time t1. The search for rows to update that match the WHERE criteria works exactly as it does for a SELECT, i.e. for rows locked by other uncommitted transactions the data is taken from the rollback segments.

If the candidate rows found at t1 are locked by another uncommitted transaction, the second phase of the UPDATE is postponed until those locks are released, i.e. until time t2.

Once the other transactions' locks are released, at time t2, the second phase of the UPDATE runs: the then-current versions of the candidate rows are modified, and it is checked that the WHERE criteria still hold - otherwise the row is not updated.

In some cases, at t2 Oracle re-runs the search for rows to update, which at first glance muddies the picture but essentially changes nothing, because it merely moves the starting point of the whole UPDATE from t1 to t2.

The main point of contention, apparently, is that the row search happens at t1, when other transactions' changes are not yet committed and therefore not visible, while the row updates happen at t2, when those changes are already visible. Naturally, the rows are no longer the ones that existed when they were found. This can lead to 'lost update' style problems, which is logically quite understandable. Where that risk is unacceptable, one should use SELECT FOR UPDATE or SERIALIZABLE.


Your understanding of how, conceptually, an update is executed is absolutely correct.

Rgds.
J.K.
Уже с Приветом
Posts: 711
Joined: 18 Jun 2003 06:58
Location: su/us

Post by J.K. »

camel wrote:The main point of contention, apparently, is that the row search happens at t1, when other transactions' changes are not yet committed and therefore not visible, while the row updates happen at t2, when those changes are already visible. Naturally, the rows are no longer the ones that existed when they were found. This can lead to 'lost update' style problems, which is logically quite understandable. Where that risk is unacceptable, one should use SELECT FOR UPDATE or SERIALIZABLE.


Quite right - it is only a question of when the transaction starts.
Oracle automatically opens a transaction on UPDATE, DELETE, INSERT, SELECT FOR UPDATE, or SET TRANSACTION READ ONLY.
By default Oracle works with rows, not with the whole table.

If we want the result the author dreams of, we would have to lock the table in the second transaction; when it is unlocked, the second update would begin (essentially the way SQL SERVER and others do it), and there would be a happy end to this story.

In my opinion it is wrong to lock the whole table before selecting the rows. Why should I wait for somebody else if I need a different tuple?
tengiz
Уже с Приветом
Posts: 4468
Joined: 21 Sep 2000 09:01
Location: Sammamish, WA

Post by tengiz »

Dmitry67 wrote:tengiz, could you devise a more practical example based on this? Say, something like a bank where transactions move money from account to account, so that the total amount stays constant. If you could build a query that violates some invariant (the sum of money across all accounts is a constant), that would be a one-hundred-percent demonstrative wrong result that nobody could argue with.

The very example I gave in my first message was precisely an attempt to build a realistic case with an invariant. The invariant is the sum of the pending column: since fruit is being moved from bin to bin, that sum must equal zero at any moment in time. All the transactions in the example are written so that this invariant is preserved - this is easy to verify. Yet the final result shows that Oracle did not uphold it. When I tried to protect the invariant with a constraint, so that violations would be caught explicitly, it turned out that I knew no elementary solution to this little problem in Oracle's case. In SQL Server, as you know perfectly well, you simply write a trigger that sums this column over inserted and deleted; if the sum is non-zero, you raise an error and roll back. The Oracle variant is a FOR EACH ROW trigger that accumulates these sums row by row. I decided that this would complicate an elementary example, so I did not bother with it.
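
For reference, a minimal sketch of the SQL Server trigger described above (assuming the fruits table from the original example; since every statement is supposed to keep sum(pending) at zero, it is enough to check that the net change introduced by each statement is zero):

Code: Select all

create trigger trg_fruits_pending on fruits
after insert, update, delete
as
begin
    -- Net change to pending introduced by this statement must be zero.
    if (select coalesce(sum(pending), 0) from inserted) -
       (select coalesce(sum(pending), 0) from deleted) <> 0
    begin
        raiserror ('sum(pending) invariant violated', 16, 1);
        rollback transaction;
    end;
end;
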
Cheers
tengiz
Уже с Приветом
Posts: 4468
Joined: 21 Sep 2000 09:01
Location: Sammamish, WA

Post by tengiz »

vc, there is little point in my repeating the same thing for the third time around, so just a couple of remarks:
This is not surprising either -- the locked row's predicate column changed, causing Session 2's transaction to abort and restart.

Perhaps I accompanied my example with unclear explanations again? There are no predicates at all in this example. And why, do you suppose, did Oracle decide to restart the statement if its internal logic allows it not to? Let me remind you that in all the other cases a restart happened only when the value of a column in the predicate changed. In this example, first, there is no predicate at all; second, the column value does not actually change: <update a set data = fk> causes no restart, while <update a set pk = pk, data = fk>, strangely enough, does. Can you explain why?

Now, regarding your reasoning about the standard and so on:

1. If you do not find it convincing that no single update statement should leave behind rows that correspond to no committed state whatsoever, then there is nothing more to talk about here. My eloquence clearly was not enough, sorry.

2. READ COMMITTED in Oracle, according to Oracle's documentation, is more than ANSI READ COMMITTED. Namely, it is read consistency and write consistency at the level of an individual statement, where an individual statement actually encompasses everything the statement may automatically invoke, including all the code in triggers. So citing the standard to explain why what the documentation promises is not actually delivered is simply not serious, you must agree.

3. A small correction: REPEATABLE READ does not eliminate phantoms, which you of course know perfectly well, so I assume that in your reasoning about how to make a new row satisfying the predicate fall into the selection or not, the mention of REPEATABLE READ was a simple slip.
Cheers
tengiz
Уже с Приветом
Posts: 4468
Joined: 21 Sep 2000 09:01
Location: Sammamish, WA

Post by tengiz »

camel wrote:This can lead to 'lost update' style problems, which is logically quite understandable. Where that risk is unacceptable, one should use SELECT FOR UPDATE or SERIALIZABLE.

Camel, I have no fundamental objections to the explanations of what exactly Oracle does. But, as I have already said, I believe that what really happens does not match what the documentation promises.

Regarding 'lost update': according to ANSI SQL, a 'lost update' is not allowed at any isolation level. If Oracle allowed a 'lost update' under READ COMMITTED, it would not have passed certification. The data corruption we observe (and I do consider it corruption from the point of view of a correctly written application that makes no additional assumptions beyond what the documentation says) is still not as grave a sin as a 'lost update' would be.
Cheers
tengiz
Уже с Приветом
Posts: 4468
Joined: 21 Sep 2000 09:01
Location: Sammamish, WA

Post by tengiz »

J.K. wrote:If we want the result the author dreams of, we would have to lock the table in the second transaction; when it is unlocked, the second update would begin (essentially the way SQL SERVER and others do it), and there would be a happy end to this story.

Wrong - row-level locks are more than sufficient to ensure a correct result here. And that is exactly how it is done in SQL Server, DB2, and others.
Cheers
vc
Уже с Приветом
Posts: 664
Joined: 05 Jun 2002 01:11

Post by vc »

tengiz,

I'll try to respond to two of your messages at once, again.

tengiz wrote:vc, there is little point in my repeating the same thing for the third time around, so just a couple of remarks:


I do not have a problem with one's trying to make one's point clearer and repeating it several times. E.g. I may be dumb, and need to listen to the same stuff, presented in different ways, in order to understand it.

tengiz wrote:
This is not surprising either -- the locked row's predicate column changed, causing Session 2's transaction to abort and restart.

Perhaps I accompanied my example with unclear explanations again? There are no predicates at all in this example. And why, do you suppose, did Oracle decide to restart the statement if its internal logic allows it not to? Let me remind you that in all the other cases a restart happened only when the value of a column in the predicate changed. In this example, first, there is no predicate at all; second, the column value does not actually change: <update a set data = fk> causes no restart, while <update a set pk = pk, data = fk>, strangely enough, does. Can you explain why?


Sorry, I goofed on this one: I did not look carefully and 'saw' a WHERE clause in the update (update a set pk = pk, data = fk) instead of just the second column update. Cannot do two at once, can I?

Here we have a foreign key constraint that triggers a re-scan because of the primary key update (a bad practice, by the way). The primary key changed; therefore Oracle has to obtain the latest version of the child row to verify that the constraint is not violated, and thereby the latest version of the row becomes available for the update. The situation is really similar to the predicate check. I was not so wrong after all ;)

tengiz wrote:Now, regarding your reasoning about the standard and so on:

1. If you do not find it convincing that no single update statement should leave behind rows that correspond to no committed state whatsoever, then there is nothing more to talk about here. My eloquence clearly was not enough, sorry.


Sorry, but you can do better than that. We are not in a debate club, and our oratorical skills, or lack thereof, are irrelevant. Please tell me what exactly is violated in the RC contract when Oracle executes its update.

tengiz wrote:2. READ COMMITTED in Oracle, according to Oracle's documentation, is more than ANSI READ COMMITTED. Namely, it is read consistency and write consistency at the level of an individual statement, where an individual statement actually encompasses everything the statement may automatically invoke, including all the code in triggers. So citing the standard to explain why what the documentation promises is not actually delivered is simply not serious, you must agree.


Please provide a URL for a meaningful discussion. I just do not want to speculate.

tengiz wrote:3. A small correction: REPEATABLE READ does not eliminate phantoms, which you of course know perfectly well, so I assume that in your reasoning about how to make a new row satisfying the predicate fall into the selection or not, the mention of REPEATABLE READ was a simple slip.


Sorry, I meant phantoms, of course.


Onward to the second message:

tengiz wrote:The very example I gave in my first message was precisely an attempt to build a realistic case with an invariant. The invariant is the sum of the pending column: since fruit is being moved from bin to bin, that sum must equal zero at any moment in time. All the transactions in the example are written so that this invariant is preserved - this is easy to verify. Yet the final result shows that Oracle did not uphold it. When I tried to protect the invariant with a constraint, so that violations would be caught explicitly, it turned out that I knew no elementary solution to this little problem in Oracle's case. In SQL Server, as you know perfectly well, you simply write a trigger that sums this column over inserted and deleted; if the sum is non-zero, you raise an error and roll back. The Oracle variant is a FOR EACH ROW trigger that accumulates these sums row by row. I decided that this would complicate an elementary example, so I did not bother with it.


The solutions are:
1. using SELECT ... FOR UPDATE, as you discovered;
2. using Oracle's Snapshot IL, wherein the second transaction would fail on the update. The transaction could be re-tried programmatically until it succeeds, or it can be run as a batch job off-hours so as not to clash with the presumably interactive Session 1;
3. A design like that is a big _bug_. Why update the critical column ('pending') in a separate transaction? What logic, if any, might substantiate this approach?

Rgds.
camel
Новичок
Posts: 86
Joined: 06 Dec 2002 18:21

Post by camel »

tengiz wrote:
camel wrote:This can lead to 'lost update' style problems, which is logically quite understandable. Where that risk is unacceptable, one should use SELECT FOR UPDATE or SERIALIZABLE.

Camel, I have no fundamental objections to the explanations of what exactly Oracle does. But, as I have already said, I believe that what really happens does not match what the documentation promises.

Regarding 'lost update': according to ANSI SQL, a 'lost update' is not allowed at any isolation level. If Oracle allowed a 'lost update' under READ COMMITTED, it would not have passed certification. The data corruption we observe (and I do consider it corruption from the point of view of a correctly written application that makes no additional assumptions beyond what the documentation says) is still not as grave a sin as a 'lost update' would be.

Under Read Committed, 'lost updates' are guaranteed to be absent only when you use select for update (nowait) - if this is written somewhere without that caveat, it's a bug in the documentation, not in Oracle :wink:

Then again, perhaps we mean different things by 'lost update' - I am talking about the situation where my UPDATE overwrites an UPDATE made by another transaction that committed after my UPDATE had already selected the row for updating.
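
A minimal sketch of that pattern in Oracle syntax (hypothetical accounts table; NOWAIT makes the lock attempt fail immediately with ORA-00054 instead of blocking):

Code: Select all

-- Lock the row before reading the value we intend to change.
select amount
  into :amt
  from accounts
 where id = :id
   for update nowait;

-- The row is now locked by us, so no other transaction's UPDATE can
-- slip in between our read and our write; the read-then-write lost
-- update is ruled out.
update accounts
   set amount = :amt + :delta
 where id = :id;
commit;
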
tengiz
Уже с Приветом
Posts: 4468
Joined: 21 Sep 2000 09:01
Location: Sammamish, WA

Post by tengiz »

vc wrote:Here we have a foreign key constraint that triggers a re-scan because of the primary key update (a bad practice, by the way). The primary key changed; therefore Oracle has to obtain the latest version of the child row to verify that the constraint is not violated, and thereby the latest version of the row becomes available for the update. The situation is really similar to the predicate check. I was not so wrong after all ;)

If Oracle obtained the latest version of the detail row for the update statement in question, then why did the statement complete as if it didn't see it?

Please provide a URL for a meaningful discussion. I just do not want to speculate.

See the online docs - Data Concurrency and Consistency:
Statement-Level Read Consistency

Oracle always enforces statement-level read consistency. This guarantees that all the data returned by a single query comes from a single point in time--the time that the query began. Therefore, a query never sees dirty data nor any of the changes made by transactions that commit during query execution. As query execution proceeds, only data committed before the query began is visible to the query. The query does not see changes committed after statement execution begins.

A consistent result set is provided for every query, guaranteeing data consistency, with no action on the user's part. The SQL statements SELECT, INSERT with a subquery, UPDATE, and DELETE all query data, either explicitly or implicitly, and all return consistent data. Each of these statements uses a query to determine which data it will affect (SELECT, INSERT, UPDATE, or DELETE, respectively).

A SELECT statement is an explicit query and can have nested queries or a join operation. An INSERT statement can use nested queries. UPDATE and DELETE statements can use WHERE clauses or subqueries to affect only some rows in a table rather than all rows.

Queries used in INSERT, UPDATE, and DELETE statements are guaranteed a consistent set of results. However, they do not see the changes made by the DML statement itself. In other words, the query in these operations sees data as it existed before the operation began to make changes.

No comments here - it's all crystal clear.
... A design like that is a big _bug_. Why update the critical column ('pending') in a separate transaction? What logic, if any, might substantiate this approach?

Hm, what exactly is the 'bug' there? Could you please go over my original example and point me to the 'design bug'? Please note that all the transactions make sure that the above invariant (sum(pending) = 0) is preserved. Yet a single well-formed statement, not a multi-statement transaction, can easily violate it.

A valid business case behind the example is rather common: a financial institution with two-stage funds transfer processing. First, you submit a transfer (put the corresponding amounts in the 'pending' column - and you always do it preserving the above invariant) while the whole transaction is in doubt. Second, after you have got all the necessary confirmations, you clear all the pending transfers, making the corresponding account balance changes. Neither stage leaves the data in an inconsistent state.
Last edited by tengiz on 06 Nov 2003 01:08, edited 1 time in total.
Cheers
tengiz
Уже с Приветом
Posts: 4468
Joined: 21 Sep 2000 09:01
Location: Sammamish, WA

Post by tengiz »

camel wrote:Under Read Committed, 'lost updates' are guaranteed to be absent only when you use select for update (nowait) - if this is written somewhere without that caveat, it's a bug in the documentation, not in Oracle :wink:

Then again, perhaps we mean different things by 'lost update' - I am talking about the situation where my UPDATE overwrites an UPDATE made by another transaction that committed after my UPDATE had already selected the row for updating.

What is a 'lost update'?

1. Cell X holds the value 0.
2. Transaction T1 wants to write the value 5 into X; to do so, it first records the old value 0 in the log, then writes 5 into X.
3. Transaction T2 wants to write the value 7 into X; to do so, it first records the old value 0 in the log, then writes 7 into X and commits.
4. Transaction T1 changes its mind and decides to roll back; to do so, it takes the old value from the log and writes 0 into X - thereby rolling back both itself and the other guy, T2.

Neither Oracle nor any other server that has passed the TPC committee's ACID certification allows this at any isolation level, thanks to the locks placed on all modified rows.

<added>
To avoid misunderstandings: this is a 'lost update' in the ANSI/ISO sense. There are other variants. For example: we want a scenario like UPDATE tab SET col = col + 5... never to produce wrong results in col. Oracle fully satisfies this at any isolation level. Which, of course, is not true when we first read the value of col with one statement and then write it with another (see the sketch below).
</added>
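
To illustrate the <added> note, a sketch of the two variants (hypothetical table tab):

Code: Select all

-- Safe at any isolation level: the read and the write of col happen
-- inside one atomic statement, under the row lock.
update tab set col = col + 5 where id = 1;

-- Not safe: col is read by one statement and written by another.
-- Between the SELECT and the UPDATE a concurrent transaction may
-- change col, and its change will be silently overwritten.
select col into :v from tab where id = 1;
update tab set col = :v + 5 where id = 1;
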
Cheers
vc
Уже с Приветом
Posts: 664
Joined: 05 Jun 2002 01:11

Post by vc »

tengiz wrote:
vc wrote:Here we have a foreign key constraint that triggers a re-scan because of the primary key update (a bad practice, by the way). The primary key changed; therefore Oracle has to obtain the latest version of the child row to verify that the constraint is not violated, and thereby the latest version of the row becomes available for the update. The situation is really similar to the predicate check. I was not so wrong after all ;)

If Oracle obtained the latest version of the detail row for the update statement in question, then why did the statement complete as if it didn't see it?


You've lost me here. What do you mean by 'as if it didn't see it'? Please rephrase.


tengiz wrote:
A consistent result set is provided for every query, guaranteeing data consistency, with no action on the user's part. The SQL statements SELECT, INSERT with a subquery, UPDATE, and DELETE all query data, either explicitly or implicitly, and all return consistent data. Each of these statements uses a query to determine which data it will affect (SELECT, INSERT, UPDATE, or DELETE, respectively).

No comments here - it's all crystal clear.


Apparently, not so clear. The phrase

"The SQL statements SELECT, INSERT with a subquery, UPDATE, and DELETE all query data, either explicitly or implicitly, and all return consistent data. Each of these statements uses a query to determine which data it will affect "

... says that the query part of the above statements returns a consistent set of data. These data (rowids, to use Oracle jargon) are used "to determine which data it [the update statement] will affect". There is no additional promise as regards the outcome of such an update, unless one 'over-interprets' the phrase

"... and all return consistent data."

If you are unhappy about the ambiguity, we can file a documentation bug with Oracle. As I mentioned before, what is considered to be the consistent part of an update/delete/insert statement is the 'conceptual' (or real) subquery only.

tengiz wrote:
... A design like that is a big _bug_. Why update the critical column ('pending') in a separate transaction? What logic, if any, might substantiate this approach?

Hm, what exactly is the 'bug' there? Could you please go over my original example and point me to the 'design bug'? Please note that all the transactions make sure that the above invariant (sum(pending) = 0) is preserved. Yet a single well-formed statement, not a multi-statement transaction, can easily violate it.

It would be a bug if both transactions were highly interactive -- it was not clear in what scenario you wanted to use the example and how often you intended to run the second update. Mainly, I was not happy about the locking behaviour of the two update transactions and the performance implications of such behaviour -- the statements' validity was not an issue.

tengiz wrote:A valid business case behind the example is rather common: a financial institution with two-stage funds transfer processing. First, you submit a transfer (put the corresponding amounts in the 'pending' column - and you always do it preserving the above invariant) while the whole transaction is in doubt. Second, after you have got all the necessary confirmations, you clear all the pending transfers, making the corresponding account balance changes. Neither stage leaves the data in an inconsistent state.


If this is the intended use for the second update -- a nightly batch job -- then the task is trivially solved in Oracle by using SELECT ... FOR UPDATE, with little effort on the implementor's part and zero impact on reporting.

Rgds.
tengiz
Уже с Приветом
Posts: 4468
Joined: 21 Sep 2000 09:01
Location: Sammamish, WA

Post by tengiz »

vc wrote:You've lost me here. What do you mean by 'as if it didn't see it'? Please rephrase.
Please see the results of the last example I posted - the data says it all.
Apparently, not so clear…. If you are unhappy about the ambiguity, we can file a documentation bug with Oracle. As I mentioned before, what is considered to be the consistent part of an update/delete/insert statement is the 'conceptual' (or real) subquery only.

A few more words of wisdom from Oracle - Data Concurrency and Consistency:
Transaction Set Consistency.

A useful way to view the read committed and serializable isolation levels in Oracle is to consider the following scenario: Assume you have a collection of database tables (or any set of data), a particular sequence of reads of rows in those tables, and the set of transactions committed at any particular time. An operation (a query or a transaction) is transaction set consistent if all its reads return data written by the same set of committed transactions. An operation is not transaction set consistent if some reads reflect the changes of one set of transactions and other reads reflect changes made by other transactions. An operation that is not transaction set consistent in effect sees the database in a state that reflects no single set of committed transactions.

Oracle provides transactions executing in read committed mode with transaction set consistency for each statement. Serializable mode provides transaction set consistency for each transaction.

I unambiguously conclude from the above that under read committed no single-statement transaction can possibly leave behind a database state that could not be produced by the very same statement run at the serializable isolation level. The only noticeable difference is that with read committed you get the added benefit of automatic rollback/restart, whereas with serializable you would have to do it manually.

As soon as anybody comes up with a tweak for the presented scenario with apples in bins where a well-formed single-statement serializable transaction produces the results you claim to be perfectly consistent, then I will consider the whole issue resolved. Any takers?

It would be a bug if both transactions were highly interactive -- it was not clear in what scenario you wanted to use the example and how often you intended to run the second update. Mainly, I was not happy about the locking behaviour of the two update transactions and the performance implications of such behaviour -- the statements' validity was not an issue.

Well, I presumed we all realized that performance and concurrency were not the point, nor was anybody's ability to present a quality design if needed. My only real concern is, and has been, correctness. Who would care how fast your system produces incorrect/inconsistent data?

If this is the intended use for the second update -- a nightly batch job -- then the task is trivially solved in Oracle by using SELECT ... FOR UPDATE, with little effort on the implementor's part and zero impact on reporting.

How do I - the application developer - know that I should use the select for update in this case? Which part of the standard documentation says that?
Cheers
camel
Новичок
Posts: 86
Joined: 06 Dec 2002 18:21

Post by camel »

tengiz wrote:
camel wrote:Under Read Committed, 'lost updates' are guaranteed to be absent only when you use select for update (nowait) - if this is written somewhere without that caveat, it's a bug in the documentation, not in Oracle :wink:

Then again, perhaps we mean different things by 'lost update' - I am talking about the situation where my UPDATE overwrites an UPDATE made by another transaction that committed after my UPDATE had already selected the row for updating.

What is a 'lost update'?

1. Cell X holds the value 0.
2. Transaction T1 wants to write the value 5 into X; to do so, it first records the old value 0 in the log, then writes 5 into X.
3. Transaction T2 wants to write the value 7 into X; to do so, it first records the old value 0 in the log, then writes 7 into X and commits.
4. Transaction T1 changes its mind and decides to roll back; to do so, it takes the old value from the log and writes 0 into X - thereby rolling back both itself and the other guy, T2.

Neither Oracle nor any other server that has passed the TPC committee's ACID certification allows this at any isolation level, thanks to the locks placed on all modified rows.

<added>
To avoid misunderstandings: this is a 'lost update' in the ANSI/ISO sense. There are other variants. For example: we want a scenario like UPDATE tab SET col = col + 5... never to produce wrong results in col. Oracle fully satisfies this at any isolation level. Which, of course, is not true when we first read the value of col with one statement and then write it with another.
</added>

I meant 'lost update' in the commonly known sense, as, for example, in the book by Philip Bernstein et al., Concurrency Control and Recovery in Database Systems: ftp://ftp.research.microsoft.com/users/ccontrol/Chapter1.pdf
In that case, what I wrote above is correct.

Returning to our banking example, suppose we have a program called Deposit, which deposits money into an account.

Procedure Deposit begin
    Start;
    input(account#, amount);
    temp := Read(Accounts[account#]);
    temp := temp + amount;
    Write(Accounts[account#], temp);
    Commit
end

Suppose account 13 has a balance of $1000 and customer 1 deposits $100 into account 13 at about the same time that customer 2 deposits $100,000 into account 13. Each customer invokes the Deposit program thereby creating a transaction to perform his or her update. The concurrent execution of these Deposits produces a sequence of Reads and Writes on the database, such as

Read1(Accounts[13]) returns the value $1000
Read2(Accounts[13]) returns the value $1000
Write2(Accounts[13], $101,000)
Commit2
Write1(Accounts[13], $1100)
Commit1

The result of this execution is that Accounts[13] contains $1100. Although customer 2's deposit was successfully accomplished, its interference with customer 1's execution of Deposit caused customer 2's deposit to be lost. This lost update phenomenon occurs whenever two transactions, while attempting to modify a data item, both read the item's old value before either of them writes the item's new value.
tengiz
Уже с Приветом
Posts: 4468
Joined: 21 Sep 2000 09:01
Location: Sammamish, WA

Post by tengiz »

camel wrote:I meant 'lost update' in the commonly known sense...

Yes, I understood - that is why I added the caveat, to avoid misunderstandings. But if you consult the ANSI/ISO standard, you will see that the notion has a different meaning there. In the standard, the 'lost update' phenomenon is absent from the table of serialization anomalies, but a reference to the inadmissibility of lost updates does appear in the text of the standard, in exactly the sense I gave. Another name for what ANSI/ISO mentions is 'dirty write'.
Cheers
vc
Уже с Приветом
Posts: 664
Joined: 05 Jun 2002 01:11

Post by vc »

tengiz wrote:
vc wrote:You've lost me here. What do you mean by 'as if it didn't see it'? Please rephrase.
Please see the results of the last example I posted - the data says it all.


No, not to me, if we are talking about the <update a set pk = pk, data = fk> statement outcome. But if for some reason you choose not to explain what you meant by "then why did the statement complete as if it didn't see it", that's cool with me.

tengiz wrote:
Apparently, not so clear…. If you are unhappy about the ambiguity, we can file a documentation bug with Oracle. As I mentioned before, what is considered to be the consistent part of an update/delete/insert statement is the 'conceptual' (or real) subquery only.

A few more words of wisdom from Oracle - Data Concurrency and Consistency:
Transaction Set Consistency.

A useful way to view the read committed and serializable isolation levels in Oracle is to consider the following scenario: Assume you have a collection of database tables (or any set of data), a particular sequence of reads of rows in those tables, and the set of transactions committed at any particular time. An operation (a query or a transaction) is transaction set consistent if all its reads return data written by the same set of committed transactions. An operation is not transaction set consistent if some reads reflect the changes of one set of transactions and other reads reflect changes made by other transactions. An operation that is not transaction set consistent in effect sees the database in a state that reflects no single set of committed transactions.

Oracle provides transactions executing in read committed mode with transaction set consistency for each statement. Serializable mode provides transaction set consistency for each transaction.

I unambiguously conclude from the above that under read committed no single-statement transaction can possibly leave behind a database state that could not be produced by the very same statement run at the serializable isolation level.


The text you are quoting describes the outcome of read (select) operations for the RCIL and SIL. Nowhere does it deal with write operations executed under the RCIL or SIL.

What it says is very simple. A select (sub)query will get a consistent set of rows. The consistency is ensured at the statement level for the RCIL and at the whole transaction level for the SIL. If the very same select statement is re-executed, you may get a different consistent set under the RCIL, but should get the same set under the SIL as long as the SIL transaction lasts. That's it.

Again, what matters is the real database behaviour; the documentation may contain bugs, omissions, and ambiguities. If one is unsure of what a database does, one has to experiment, run test cases, etc.

tengiz wrote:As soon as anybody comes up with a tweak for the presented scenario with apples in bins where a well-formed single-statement serializable transaction produces the results you claim to be perfectly consistent, then I will consider the whole issue resolved. Any takers?


You've lost me here again. Firstly, we've been discussing the READ COMMITTED environment all along; why all of a sudden are we switching to SERIALIZABLE? Secondly, I've not claimed anything at all about "the results being perfectly consistent"; rather, I've been patiently trying to explain what the expression "selecting a consistent set of rows as of some point in time" means and how an update obtains such a set. Thirdly, several reasonable solutions were offered for the problem you presented (SERIALIZABLE; SELECT FOR UPDATE).

tengiz wrote:
It would be a bug if both transactions were highly interactive -- it was not clear in what scenario you wanted to use the example and how often you intended to run the second update. Mainly, I was not happy about the locking behaviour of the two update transactions and the performance implications of such behaviour -- the statements' validity was not an issue.

Well, I presumed we all realized that performance and concurrency were not the point, nor was anybody's ability to present a quality design if needed. My only real concern is, and has been, correctness. Who would care how fast your system produces incorrect/inconsistent data?


Well, I strongly disagree. Nobody's arguing with the statement that the solution has to be correct, but correctness alone is not sufficient in real life. What's the use of a correct solution if you cannot produce results reasonably fast? Performance is a crucial factor that more often than not influences application design dramatically. But I am sure you know this.

tengiz wrote:
If this is the intended use for the second update -- a nightly batch job -- then the task is trivially solved in Oracle by using SELECT ... FOR UPDATE, with little effort on the implementor's part and zero impact on reporting.

How do I - the application developer - know that I should use the select for update in this case? Which part of the standard documentation says that?


As a developer working with a database, one is supposed to know this stuff; it's part of the job description, really. The state of the database "art" today is such that database independence is a myth. You have to design a database-backed application fully aware of the specific DB engine's peculiarities, especially with respect to locking/concurrency.

Rgds.
tengiz
Уже с Приветом
Posts: 4468
Joined: 21 Sep 2000 09:01
Location: Sammamish, WA

Post by tengiz »

vc wrote:No, not to me, if we are talking about the <update a set pk = pk, data = fk> statement outcome.

Never mind, I misinterpreted the context of the question, and in this particular case (the statement touched the primary key) I was the one who goofed - it was the other case (no primary key update) where it didn't see the consistent state. Sorry. But one question still remains: the key didn't really change (pk = pk), so why even bother to check whether the detail row existed?

The text you are quoting describes the outcome of read (select) operations for the RCIL and SIL. Nowhere does it deal with write operations executed under the RCIL or SIL.... What it says is very simple. A select (sub)query will get a consistent set of rows. The consistency is ensured at the statement level for the RCIL and at the whole transaction level for the SIL. If the very same select statement is re-executed, you may get a different consistent set under the RCIL, but should get the same set under the SIL as long as the SIL transaction lasts. That's it.

vc, any type of write activity (insert/update/delete) has to be presented with some database state, which is then going to be transformed. No persistent data can be written based on anything but the previous database state, the application logic, and some external data - and let me repeat: there must be only one observed database state for queries and for updates. Otherwise the following two statements could produce different data:

Code: Select all

update tab t set col1 = (select col2 from tab where key = 123) where t.key = 123

Code: Select all

update tab set col1 = col2 where key = 123

Can you show me a scenario where this is really true for any mainstream DBMS? Would you consider it acceptable for a reasonably implemented database engine? Would you then agree that it would essentially forbid virtually any type of otherwise equivalent query transformations?

Anyway, what you are trying to say is that the view of the database might sometimes be different for queries and for updates. Yes, I know it from my recent and quite surprising experience with Oracle, but I have found nothing in the documentation that confirms it. Please find me any piece of information that says otherwise.

You've lost me here again. Firstly, we've been discussing the READ COMMITTED environment all along; why all of a sudden are we switching to SERIALIZABLE? Secondly, I've not claimed anything at all about "the results being perfectly consistent"; rather, I've been patiently trying to explain what the expression "selecting a consistent set of rows as of some point in time" means and how an update obtains such a set. Thirdly, several reasonable solutions were offered for the problem you presented (SERIALIZABLE; SELECT FOR UPDATE).

Let me decipher my previous message, since it obviously needs some more explanation:

1. According to the docs, with respect to the observed database state a single-statement read committed transaction is no different from the same statement executed under a serializable transaction, except that serializable might need to be restarted manually on write conflicts. Therefore, for any data produced by any single-statement read committed transaction there should be a way to execute it under serializable and create exactly the same database state, unless the documentation says otherwise.

2. If updates might transform data in such a way that the final result cannot possibly be obtained from some consistent view of the database, then I'd say that Oracle should explicitly admit that statement-level consistency never warrants a consistent outcome of update operations. If you're saying that I mistakenly believed that you considered the results of the experiments to be 'perfectly consistent', and if you agree that these results are indeed inconsistent, then there's nothing to argue about.

3. The solutions that have been mentioned are irrelevant. I'm trying to make sense of Oracle's claims and the observed facts of life, which in my opinion are clearly conflicting.

vc wrote:Well, I strongly disagree. Nobody's arguing with the statement that the solution has to be correct, but correctness alone is not sufficient in real life...

Now you've lost me. For the purposes of this particular discussion only, I don't care about fine tuning or best design practices; I only care about the promises explicitly or implicitly made by Oracle. And I'm really surprised by your reply. Do I need to attach a dozen-paragraph disclaimer explaining all possible interpretations of any specific context-sensitive statement every time it might seem ambiguous or vague?

vc wrote:As a developer working with a database, one is supposed to know this stuff; it's part of the job description, really. The state of the database "art" today is such that database independence is a myth. You have to design a database-backed application fully aware of the specific DB engine's peculiarities, especially with respect to locking/concurrency.

It's completely irrelevant to this discussion - we are talking about issues far more basic and fundamental than effectiveness, expressiveness, etc. But anyway, I'm pretty sure that you're aware of such peculiarities of Oracle's engine. So, can you show me a piece of the standard documentation which makes it obvious to a reasonable application developer that he indeed has to use serializable for any given single-statement transaction to get predictable results?
Cheers
vc
Уже с Приветом
Posts: 664
Joined: 05 Jun 2002 01:11

Post by vc »

tengiz wrote:...But one question still remains: the key didn't really change (pk = pk), so why even bother to check whether the detail row existed?

This I cannot answer; I do not know why Oracle decided not to optimize it.

tengiz wrote:vc, any type of write activity (insert/update/delete) has to be presented with some database state, which is then going to be transformed. No persistent data can be written based on anything but the previous database state, the application logic, and some external data - and let me repeat: there must be only one observed database state for queries and for updates. Otherwise the following two statements could produce different data:

Code: Select all

update tab t set col1 = (select col2 from tab where key = 123) where t.key = 123
update tab set col1 = col2 where key = 123

Can you show me a scenario where this is really true for any mainstream DBMS? Would you consider it acceptable for a reasonably implemented database engine? Would you then agree that it would essentially forbid virtually any type of otherwise equivalent query transformations?


Let's not over-dramatize the situation. Any 'mainstream' implementation is a crude hack of the original relational model, but that's the best we can have. Of course, it would be nice to have serializable schedules in each and every case, but the grim reality is that, today, for performance reasons, it's simply impossible, and one has to do one's best with crap like the various isolation levels.

If you have discovered other flaws in Oracle's implementation of the RCIL above and beyond phantoms and non-repeatable reads, I'll be happy to discuss them.

tengiz wrote:Anyway, what you are trying to say is that the view of the database might sometimes be different for queries and for updates. Yes, I know it from my recent and quite surprising experience with Oracle, but I have found nothing in the documentation that confirms it. Please find me any piece of information that says otherwise.

I am surprised that you have been surprised. The phenomenon we have been observing is nothing more than a manifestation of phantom reads, and the documentation unambiguously states that the RCIL does not protect from them.
Phantom reads happen under any locking scheduler as well, as you very well know, albeit under different circumstances. I'll be happy to supply an example for those who are unaware of it. Speaking of documentation, I quite agree that Oracle should have supplied some examples showing possible scenarios with phantom reads. However, Oracle's reticence regarding the subject is hardly unique; no other database vendor, except IBM/DB2, offers this kind of information.

tengiz wrote:Let me decipher my previous message, since it obviously needs some more explanation:

1. According to the docs, with respect to the observed database state a single-statement read committed transaction is no different from the same statement executed under a serializable transaction, except that serializable might need to be restarted manually on write conflicts. Therefore, for any data produced by any single-statement read committed transaction there should be a way to execute it under serializable and create exactly the same database state, unless the documentation says otherwise.


Probably the choice of words in the documentation is really poor... Let me offer another description of what is happening: in an RC transaction, each query looks for data consistent as of the point in time when that individual query started. In a serializable transaction, all queries look for data consistent as of the timestamp when the FIRST query of the entire transaction started. Additionally, the SIL eliminates phantoms and non-repeatable reads.
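
A small two-session sketch of the difference (hypothetical table t, Oracle syntax):

Code: Select all

-- Session 1, read committed:
select sum(val) from t;  -- consistent as of the start of this query

-- Session 2:
update t set val = val + 1 where id = 1;
commit;

-- Session 1, still read committed:
select sum(val) from t;  -- a new query, a new snapshot: sees the commit

-- Session 1, serializable (first statement of a new transaction):
set transaction isolation level serializable;
select sum(val) from t;  -- consistent as of the start of the transaction
-- ... Session 2 commits another change here ...
select sum(val) from t;  -- same result as the previous query
commit;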

tengiz wrote:2. If updates might transform data in such a way that the final result cannot possibly be obtained from some consistent view of the database, then I'd say that Oracle should explicitly admit that statement-level consistency never warrants a consistent outcome of update operations. If you're saying that I mistakenly believed that you considered the results of the experiments to be 'perfectly consistent', and if you agree that these results are indeed inconsistent, then there's nothing to argue about.


_No_ database provides a consistent outcome in all possible scenarios at the READ COMMITTED isolation level. The ugly phenomena, like phantoms and non-repeatable reads, simply show up under different circumstances. As I said before, what provides consistency is real SERIALIZABLE. But today's implementations thereof are so unperformant as to be impractical; therefore a hypothetical developer has to use the RCIL reinforced with, say, SELECT FOR UPDATE in Oracle's case, or various locking hints in the case of a locking scheduler. Sometimes one can use other tools, like Oracle's Snapshot IL, to solve concurrency issues.

Oracle, by the way, used to have an initialization parameter (SERIALIZABLE=true/false) that has been deprecated in the current releases but can still be used (the documentation does not describe it). With this parameter set to 'true', Oracle behaved in a manner similar to Sybase's Serializable, where a whole table was locked during any update. I believe it was deprecated because it never enjoyed much popularity...

tengiz wrote:3. The solutions that have been mentioned are irrelevant. I'm trying to make sense of Oracle's claims and the observed facts of life, which in my opinion are clearly conflicting.

I suggest we stick to the facts of life ;)

tengiz wrote:Now you've lost me. For the purposes of this particular discussion only, I don't care about fine tuning or best design practices; I only care about the promises explicitly or implicitly made by Oracle. And I'm really surprised by your reply. Do I need to attach a dozen-paragraph disclaimer explaining all possible interpretations of any specific context-sensitive statement every time it might seem ambiguous or vague?


No, of course you needn't; we can always eliminate any ambiguity through a friendly dialog ;) I merely wanted to stress the fact that performance is usually a much more important factor in database design than one might imagine.

tengiz wrote: So, can you show me a piece of the standard documentation which makes it obvious to a reasonable application developer that he indeed has to use serializable for any given single-statement transaction to get predictable results?

Such a document does not exist, as that would be too harsh a requirement. Database application design, as you undoubtedly know, is based on a set of various tradeoffs. In some circumstances the RCIL alone is good enough per se; sometimes it has to be 'enhanced' with optimistic/pessimistic locking; sometimes, as I said, one can use other tools like Oracle's Serializable.

P.S. When I was writing my response, it occurred to me that the example with which you started the discussion is prone to deadlocks if one assumes random transaction arrival:

Code: Select all

Session 1:
drop table fruits;
create table fruits
(
  bin int primary key,
  amount int default (0) not null,
  pending int default (0) not null,
  cleared datetime default (getdate()) null
)
go

  insert into fruits (bin, amount, pending, cleared) values (1, 10, -2, null);
  insert into fruits (bin, amount, pending, cleared) values (2,  0,  2, null);
  insert into fruits (bin, amount, pending) values (3,  0,  0);
  commit;
go
select * from fruits;
go

 
update fruits set pending = pending + 1, cleared = null where bin = 3;
go

Session 2:
update fruits
set amount = amount + pending, pending = 0, cleared = getdate()
where cleared is null;
select * from fruits;
go

Session 1:
update fruits set pending = pending - 1, cleared = null where bin = 1;
go
Msg 1205, Level 13, State 50, Server VJC, Line 1
Transaction (Process ID 51) was deadlocked on {lock} resources with another
process and has been chosen as the deadlock victim. Rerun the transaction.


I described the situation when Row 3 is updated first. In your original example Row 1 was updated first.

So, assuming random transaction ordering, either there is a bug ;) in the code, or the ordering is implied. Still, in the latter case our hypothetical developer will be just as puzzled as with Oracle's phantoms whilst trying to figure out why ordering matters.

Rgds.
tengiz
Уже с Приветом
Posts: 4468
Joined: 21 Sep 2000 09:01
Location: Sammamish, WA

Post by tengiz »

vc,

We'd been sidetracked a lot by irrelevant nonsense. Let's not do that anymore, and let's get right to the point at last. Here are the facts of life as I see them:

1. Oracle makes rather bold claims to provide 'statement-level transaction set consistency' for read committed. This indeed is one of the biggest and coolest selling points for Oracle as far as application developers are concerned. We found nothing in the documentation that says otherwise.

2. In many cases Oracle behaves as if the above set consistency is preserved for any kind of single-statement transaction, not just read-only ones. In order to do so, it even goes so far as to silently roll back and then restart the statement, just to make it act as if it were serializable with proper automatic write-conflict resolution. As a result, the outcome of any such case looks as if the query was indeed presented with a 'transaction set consistent' view of the database.

3. But in some other cases it doesn't. If I understand your explanation correctly, it's because the read committed isolation level is susceptible to isolation anomalies, and therefore it's perfectly fine not to produce 100% transaction-set-consistent results as long as the observed anomalies are allowed to exist under ANSI read committed.

Now let's forget what has been claimed in the documentation, and let me ask you a couple of questions I should have asked much earlier, but really couldn't, because only now, I hope, are we on the same page at last:

1. If (3) is acceptable for read committed, then why even bother with (2)? The worst thing that could have happened is precisely the same isolation anomalies. Therefore, arguments similar to the ones in (3) apply in (2), and there is no need to restart at all!

2. If, on the other hand, Oracle believes that providing consistent writes is important enough to distinguish Oracle from the other mainstream DBMSs and to give it another indisputable advantage over them, then why not fix case (3)? It only takes a simple tweak in the restart algorithm: instead of checking predicate and index column changes, it needs to check the entire set of relevant columns, i.e. all the columns mentioned in the query, but only for the table being updated. Also, as we have seen, it sometimes does redundant checks, causing restarts when they aren't necessary.

If I were one of the Oracle core engine devs, I would definitely go for it. It's very cheap in terms of dev work, but it gives rather cool benefits. As for the performance concerns: the fix would in fact degrade performance in some cases but improve it in others. Therefore, it seems like a fair trade-off, with the additional bonus of being 100% correct all the time for single-statement transactions.
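Until (or unless) such a tweak appears, an application could approximate it by hand: mention every column whose consistency matters in the statement's own predicate, so that the existing predicate-column check fires the restart whenever a concurrent writer changes that column. A hypothetical sketch, assuming a table t (pk, nullable fk, data) and assuming the restart check does cover columns referenced in the WHERE clause:

Code: Select all

-- instead of the unqualified
update t set data = fk;
-- reference fk in the predicate without changing which rows qualify;
-- a concurrent change to fk then invalidates the consistent read and
-- forces the statement to restart against the new committed state
update t set data = fk where fk = fk or fk is null;
commit;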

I can see two possible explanations for why neither the documentation has been fixed nor the algorithm improved:

1. Rather unlikely, but theoretically possible: they’re not aware of the problem – there is this annoying bug in the restart algorithm and for some stupid reason nobody brought it to the attention of the right people.

2. They're very well aware of the problem, but they won't fix it for some non-technical reason: for example, some otherwise award-winning critical benchmark would get screwed if Oracle restarted some update query in some hot transaction, whereas for that particular update ANSI read committed is more than enough to produce correct results. Or some other similar marketing b/s.

That's all. I have nothing more to add. If you have any questions or comments, you're most welcome, as always. But please, let's not get sidetracked again.
Cheers
vc
Уже с Приветом
Posts: 664
Joined: 05 Jun 2002 01:11

Post by vc »

Tengiz,

tengiz wrote:vc,

We'd been sidetracked a lot by irrelevant nonsense. Let's not do that anymore, and let's get right to the point at last. Here are the facts of life as I see them:

1. Oracle makes rather bold claims to provide 'statement-level transaction set consistency' for read committed. This indeed is one of the biggest and coolest selling points for Oracle as far as application developers are concerned. We found nothing in the documentation that says otherwise.

2. In many cases Oracle behaves as if the above set consistency is preserved for any kind of single-statement transaction, not just read-only ones. In order to do so, it even goes so far as to silently roll back and then restart the statement, just to make it act as if it were serializable with proper automatic write-conflict resolution. As a result, the outcome of any such case looks as if the query was indeed presented with a 'transaction set consistent' view of the database.

3. But in some other cases it doesn't. If I understand your explanation correctly, it's because the read committed isolation level is susceptible to isolation anomalies, and therefore it's perfectly fine not to produce 100% transaction-set-consistent results as long as the observed anomalies are allowed to exist under ANSI read committed.

Now let's forget what has been claimed in the documentation, and let me ask you a couple of questions I should have asked much earlier, but really couldn't, because only now, I hope, are we on the same page at last:

1. If (3) is acceptable for read committed, then why even bother with (2)? The worst thing that could have happened is precisely the same isolation anomalies. Therefore, arguments similar to the ones in (3) apply in (2), and there is no need to restart at all!


Since the reason for this behaviour is not described anywhere, what follows is just my speculation.

I think Oracle acts like that in order to prevent the following anomalies:

1. The aggregation query anomaly. Say we have a query like this: select sum(x) from t1 where x > 0. If Oracle did not restart, it could produce a result that does not correspond to any committed state.

2. The update anomaly. Say we want to add interest to the principal:
update t1 set x = 1.05*x where <row not yet updated>. Again, if Oracle did not restart, a record might get updated two or more times (see the sketch after this list).

3. update/delete anomaly. Whilst Transaction A is trying to update/delete a row, Transaction B has already deleted the same row. Perhaps the cheapest way to resolve the conflict would be to restart.
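To illustrate anomaly (2), here is a hypothetical sketch with a flag column marking the rows that have already received interest (the schema is made up):

Code: Select all

-- hypothetical schema for the interest example
create table t1 (
  id       int primary key,
  x        number not null,
  credited int default 0 not null  -- 0 = interest not yet applied
);

-- two sessions run the same statement concurrently:
update t1 set x = 1.05 * x, credited = 1 where credited = 0;

-- the slower session's consistent read still shows credited = 0 for
-- rows the faster session has already updated; applying its change to
-- the current row versions without a restart would add interest twice,
-- whereas a restart re-evaluates the predicate against the newly
-- committed state and skips those rows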

(1) and (2) can be explained away by reciting the RCIL verse (as I did for Oracle's phantoms). Probably the real reason is (3); I just don't know for sure.

tengiz wrote:2. If, on the other hand, Oracle believes that providing consistent writes is important enough to distinguish Oracle from the other mainstream DBMSs and to give it another indisputable advantage over them, then why not fix case (3)? It only takes a simple tweak in the restart algorithm: instead of checking predicate and index column changes, it needs to check the entire set of relevant columns, i.e. all the columns mentioned in the query, but only for the table being updated. Also, as we have seen, it sometimes does redundant checks, causing restarts when they aren't necessary.

If I were one of the Oracle core engine devs, I would definitely go for it. It's very cheap in terms of dev work, but it gives rather cool benefits. As for the performance concerns: the fix would in fact degrade performance in some cases but improve it in others. Therefore, it seems like a fair trade-off, with the additional bonus of being 100% correct all the time for single-statement transactions.

I can see two possible explanations for why neither the documentation has been fixed nor the algorithm improved:

1. Rather unlikely, but theoretically possible: they’re not aware of the problem – there is this annoying bug in the restart algorithm and for some stupid reason nobody brought it to the attention of the right people.

2. They're very well aware of the problem, but they won't fix it for some non-technical reason: for example, some otherwise award-winning critical benchmark would get screwed if Oracle restarted some update query in some hot transaction, whereas for that particular update ANSI read committed is more than enough to produce correct results. Or some other similar marketing b/s.


I am almost ready to agree that the reason is #2.

tengiz wrote:That's all. I have nothing more to add. If you have any questions or comments, you're most welcome, as always. But please, let's not get sidetracked again.


Actually, I think we've covered it all. I do not have much to contribute in the absence of any additional information about this kind of behaviour.

Thanks. It's been a pleasure as usual and I do not mind being sidetracked at all ;).

Rgds.
tengiz
Уже с Приветом
Posts: 4468
Joined: 21 Sep 2000 09:01
Location: Sammamish, WA

Post by tengiz »

vc, just a couple of comments:
vc wrote:I think Oracle acts like that in order to prevent the following anomalies:...

Let's assume that Oracle never intended to honor the statement-level transaction set consistency promise for updates, and it's the documentation that is screwed, not the engine. Then I'd say that the simplest way to prevent the anomalies you've described for a read committed transaction would be to proceed (with writes only!) based on the committed state of the data as of now, just as any normal locking scheduler would. It would perfectly satisfy ANSI read committed; at least I don't see any real schedule which wouldn't. There's no need to restart at all.

I am almost ready to agree that the reason is #2.

Good. It means that we've definitely nailed down the problem: it's either a documentation bug, which is highly unlikely, or, much more likely, a 'feature' well known to Oracle which, unfortunately, has not been fixed so far for non-technical reasons.
Cheers
Dmitry67
Уже с Приветом
Posts: 28294
Joined: 29 Aug 2000 09:01
Location: SPB --> Gloucester, MA, US --> SPB --> Paris

Post by Dmitry67 »

tengiz wrote:2. They're very well aware of the problem, but they won't fix it for some non-technical reason: for example, some otherwise award-winning critical benchmark would get screwed if Oracle restarted some update query in some hot transaction, whereas for that particular update ANSI read committed is more than enough to produce correct results. Or some other similar marketing b/s.


Why not implement a

AWARD_WINNING_CRITICAL_BENCHMARK_MODE=ON/OFF ?
:)
JustMax
Уже с Приветом
Posts: 1476
Joined: 05 Dec 2000 10:01
Location: Vilnius -> Bonn

Post by JustMax »

Dmitry67 wrote:Why not implement a

AWARD_WINNING_CRITICAL_BENCHMARK_MODE=ON/OFF ?
:)


Or, even better:

TENGIZ_IL_BEHAVIOUR=ON/OFF

:mrgreen:
vc
Уже с Приветом
Posts: 664
Joined: 05 Jun 2002 01:11

Post by vc »

tengiz wrote:vc, just a couple of comments:


Tengiz,

Please check your personal messages. I've sent you one about how to get more information on the stuff we've been discussing.

Rgds.
tengiz
Уже с Приветом
Posts: 4468
Joined: 21 Sep 2000 09:01
Location: Sammamish, WA

Post by tengiz »

Here, by the way, is the observed anomaly in its pure academic form:

Code: Select all

--session 1
--
drop table t;
create table t (
   pk int primary key,
   data int null
);
insert into t values (1, 1);
insert into t values (2, 2);
commit;
select * from t;
update t set data = data + 1;
select * from t;

--session 2
--
update t t1 set data = data + (select data from t t2 where pk = 2) where pk = 1;
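-- blocks on row 1 until session 1 commits; pk, the only predicate
-- column, is unchanged, so no restart occurs: per the behaviour
-- discussed above, row 1 ends up with data = 2 (its current value)
-- plus 2 (the snapshot value of row 2) = 4, which matches neither
-- committed state (those would give 3 or 5)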
select * from t;
commit;

--session 1
--
commit;

Это - "read skew" или в эквивалентной одноверсионной истории - "r1[x] w2[x]w2[y]c2 r1[y]...c1". Всё, что обсуждалось в дискуссии, это разные формы этого самого явления. Если формально рассматривать определения уровней изоляции ANSI, то "read skew" не должно наблюдаться на уровнях выше ANSI read committed. Поэтому строка из документации Oracle, где они говорят, что:
Because an Oracle query only sees data that was committed at the beginning of the query (the snapshot time), Oracle actually offers more consistency than is required by the ANSI/ISO SQL92 standards for READ COMMITTED isolation.

without a qualification of exactly which situations it is actually true in, is not correct.
Cheers
