Data lock control modes. Managed locking. Physical implementation of locks in the DBMS

The 1C:Enterprise system supports two modes of working with the database: automatic transaction locks and managed transaction locks.

The fundamental difference between these modes is as follows. The automatic locking mode does not require the developer to take any action to manage locks in a transaction: the rules of consistent data access are enforced by the 1C:Enterprise platform through certain transaction isolation levels in the particular DBMS. This mode is the simplest for the developer; however, in some cases (for example, under intensive simultaneous work of a large number of users), the isolation level used in the DBMS cannot provide sufficient parallelism, which shows up as a large number of lock conflicts when users work.

When operating in managed locking mode, the 1C:Enterprise system uses a much lower level of transaction isolation in the DBMS, which can significantly increase the concurrency of users of the application solution. However, unlike the automatic locking mode, this level of transaction isolation can no longer by itself ensure compliance with all rules for working with data in a transaction. Therefore, when working in managed mode, the developer is required to independently manage the locks set in the transaction.

In summary, the differences between the automatic and managed locking modes are shown in the following table:

(comparison table attached as an image)

Setting the locking mode in the configuration

The configuration has the Data Lock Control Mode property, and every applied configuration object has a property with the same name.
For the configuration as a whole, this property can be set to Automatic, Managed (the default for a new configuration), or Automatic and Managed. The values Automatic and Managed mean that the corresponding locking mode is used for all configuration objects, regardless of the values set for individual objects. The value Automatic and Managed means that each configuration object uses the mode specified in its own Data Lock Control Mode property: Automatic or Managed.
Note that the locking mode specified for a metadata object applies to the transactions that the 1C:Enterprise system itself initiates when working with that object's data (for example, when writing the object).
If, however, the write is performed inside a transaction started by the developer (the StartTransaction() method), the lock control mode is determined by the Lock Mode parameter of the StartTransaction() method, not by the Data Lock Control Mode property of the metadata object.
By default this parameter is set to DataLockControlMode.Automatic, so to use managed locking in an explicit transaction you must pass DataLockControlMode.Managed.

Working with managed locks using the built-in language

To manage locks in a transaction, the built-in language object DataLock is used. An instance of this object can be created using a constructor and allows you to describe the required lock spaces and lock modes. To set all created locks, use the Lock() method of the DataLock object. If this method is executed in a transaction (explicit or implicit), locks are acquired and will be released automatically when the transaction ends. If the Lock() method is executed outside of a transaction, no locks will be acquired.

For each lock item, conditions can be set requiring a field value to equal a specified value or to fall within a specified range.
Conditions can be set in two ways:

  • by explicitly specifying the field name and value (SetValue() method of the DataLockElement object);
  • by specifying a data source containing the required values (the DataSource property of the DataLockElement object).

For each lock item, one of two lock modes can be set:

  • shared,
  • exclusive.

The managed locking compatibility table looks like this:

                Shared        Exclusive
  Shared        compatible    incompatible
  Exclusive     incompatible  incompatible

Shared mode means that the locked data cannot be changed by another transaction until the current transaction ends.
Exclusive mode means that the locked data can be neither changed by another transaction nor read by another transaction that tries to set a shared lock on it, until the current transaction ends.

Features of operation in the Automatic and Managed mode

When working in the Automatic and Managed lock control mode, two features should be taken into account:

  • Regardless of the mode set for a given transaction, the system will additionally set the corresponding managed locks.
  • The lock control mode is determined by the topmost transaction. In other words, if another transaction was already active when a transaction started, the new transaction can only run in the mode already established for the running transaction.

Let's consider the listed features in more detail.
The first feature means that even if a transaction uses automatic lock management, the system will additionally set the corresponding managed locks when writing data in that transaction. It follows that transactions running in managed locking mode may conflict with transactions running in automatic locking mode.
The second feature is that the lock management mode specified for a metadata object in the configuration or specified explicitly when starting a transaction (as a parameter to the StartTransaction() method) is only a “desired” mode. The actual lock management mode in which the transaction will be executed depends on whether this is the first call to start a transaction, or whether another transaction has already started in this session of the 1C:Enterprise system at that moment.
For example, if you need to manage locks when writing sets of register records when posting a document, then the managed locking mode must be set both for the register itself and for the document, since writing sets of register records will be performed in the transaction opened when writing the document.

Today we will talk about locks both at the level of the 1C 8.3 and 8.2 platform and at the DBMS level. Data locking is a mandatory element of any system with more than one user.

Below I will describe how locks work and what types there are.

A lock is information that a system resource has been taken by another user. There is an opinion that a lock is an error. It is not: locking is an unavoidable measure in a multi-user system for sharing resources.

Only redundant ("unnecessary") locks harm the system: locks that cover more data than necessary. You need to learn to eliminate them, because they lead to suboptimal system operation.

Locks in 1C are conventionally divided into object locks and transactional locks.

Object locks, in turn, are optimistic and pessimistic; transactional locks are divided into managed and automatic.

Object locks 1C

This type of locking is completely implemented at the 1C platform level and does not affect the DBMS in any way.


Pessimistic locks

This lock is triggered when one user has changed something in an object form and a second user tries to change the same object in a form.

Optimistic locks

This lock compares object versions: if two users opened the same object form and one of them changed and saved the object, then when the second tries to save, the system reports an error saying the object versions differ.

Transactional locks 1C

The 1C transactional locking mechanism is much more interesting and more functional than the object locking mechanism. This mechanism actively involves locking at the DBMS level.

Incorrect operation of transactional locks can lead to the following problems:

  • lost updates;
  • dirty reads;
  • non-repeatable reads;
  • phantom reads.

These problems were discussed in detail in a separate article.

Automatic transaction locks 1C and DBMS

In automatic mode, the DBMS is entirely responsible for locking; the developer is not involved in the process at all. This simplifies the 1C programmer's work, but building an information system for a large number of users on automatic locks is undesirable (especially on the PostgreSQL and Oracle DBMSs, which in this mode lock the entire table when data is modified).

In automatic mode, different isolation levels are used for different DBMSs:

  • SERIALIZABLE on the entire table – 1C file mode, Oracle;
  • SERIALIZABLE on records – MS SQL, IBM DB2 when working with non-object entities;
  • REPEATABLE READ on records – MS SQL, IBM DB2 when working with object entities.

Managed mode of transactional locks 1C and DBMS

In managed mode, the application developer takes responsibility for locking at the 1C level, and the DBMS runs transactions at a fairly low isolation level - READ COMMITTED (SERIALIZABLE for the file DBMS).

When any database operation is performed, the 1C lock manager analyzes whether the resource can be locked (seized). Locks of the same user are always compatible.

Two locks are incompatible if they are set by different users, are of incompatible types (exclusive/shared), and are set on the same resource.
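The incompatibility rule above (different users, conflicting modes, same resource) can be captured in a small model. Below is an illustrative sketch in Python, not platform code; all names are invented for the example, and the real 1C lock manager is far more involved:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Lock:
    session: int   # which session (user) set the lock
    mode: str      # "shared" or "exclusive"
    resource: str  # the locked resource, e.g. a register plus field values

def compatible(a: Lock, b: Lock) -> bool:
    """Two locks conflict only when ALL three conditions hold:
    different sessions, the same resource, and not both shared."""
    if a.session == b.session:
        return True   # locks of the same user are always compatible
    if a.resource != b.resource:
        return True   # different resources never conflict
    return a.mode == "shared" and b.mode == "shared"

# Shared locks on one resource from different sessions coexist,
# but an exclusive lock conflicts with any lock of another session:
s1 = Lock(1, "shared", "FreeRemains|Chair")
s2 = Lock(2, "shared", "FreeRemains|Chair")
x2 = Lock(2, "exclusive", "FreeRemains|Chair")
```

Running `compatible(s1, s2)` reports the shared pair as compatible, while `compatible(s1, x2)` reports a conflict, matching the compatibility table earlier in the article.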

Physical implementation of locks in a DBMS

Physically, locks in MS SQL Server are records in a table named syslockinfo, located in the master system database.

The table conventionally has four fields:

  1. SPID – the ID of the session holding the lock;
  2. RES ID – what exactly is locked;
  3. MODE – the lock type: S, U, or X (MS SQL actually has 22 lock types, but only three are used together with 1C);
  4. the lock status – GRANT (the lock is set) or WAIT (the lock is waiting its turn).

The main reasons for switching to managed locks:

  • the main one – a recommendation from a 1C:Expert review or from 1C:TsUP monitoring data;
  • problems with concurrent user work;
  • use of DBMSs such as Oracle or PostgreSQL.


The essence of managed locks

When working in automatic locking control mode, 1C:Enterprise sets a high degree of data isolation in a transaction at the DBMS level. This allows you to completely eliminate the possibility of obtaining incomplete or incorrect data without any special efforts on the part of application developers.

This is a convenient and correct approach for a small number of active users. The price of ease of development is a certain amount of redundant locking at the DBMS level. These locks are associated both with the peculiarities of the implementation of locking mechanisms in the DBMS itself, and with the fact that the DBMS cannot (and does not) take into account the physical meaning and structure of 1C:Enterprise metadata objects.

When working with high contention for resources (a large number of users), at some point the impact of redundant locks becomes noticeable in terms of performance in parallel mode.

After the configuration is switched to managed mode, the platform activates the additional "lock manager" functionality, and data integrity is controlled on the 1C server side rather than in the DBMS. This increases the load on the 1C server hardware (faster processors and more memory are needed) and even introduces a slight slowdown (a few percent), but it significantly improves the locking situation: locks are taken on objects rather than on combinations of tables, the locked area is smaller, and in some cases read locks are held for a shorter time (not until the end of the transaction). Overall concurrency improves.


New standard configurations from 1C are implemented in managed mode from the start.

  • Question: Is it possible to do an audit first and only then switch to managed locks?

Answer: Yes. The audit serves as additional justification for switching to managed locks; it also evaluates how much automatic locks contribute to the overall slowdown and whether additional effort is needed beyond the conversion itself.

  • Question: What kind of access should we provide for the conversion to managed locks – RDP, TeamViewer? Or can I simply send you the configuration file?

Answer: We try not to limit ourselves to one specific remote access technology; any will do. If it does not matter to you, RDP is more practical.
We can perform the optimization from a configuration file you send, but then we will not be able to debug against real data, and you will have to test more carefully. If we work on a copy of the database, we can test the result more thoroughly before handing it over.

  • Question: We have 10 full-time programmers who change something in the configuration every day, and a shared configuration repository is used. How will interaction be organized during the conversion to managed locks? Or should all the programmers be sent on vacation?

Answer: As a rule, our changes are made within a couple of days. The rest of the time is spent testing the changes, including against the logic required by the business rather than just technical considerations. We can deliver our changes as a separate .cf configuration file, which your programmer then commits to the repository; no one has to go on vacation. With other forms of interaction, we simply agree on which objects your developers plan to lock in the repository, so that we can build a work plan convenient for both sides. As a rule, your developers do not need to lock the entire configuration or hand us the "steering wheel" for a whole day.

When working in multi-user mode in 1C, data locks are a necessary mechanism. They protect against situations like two managers simultaneously selling the same product to different clients. The 1C platform provides two locking modes – managed and automatic. The first of them is optimal for highly loaded systems with a large number of users. Let's take a closer look at it.

Features of the managed locking mode

Unlike the automatic mode, the managed mode lets the 1C system use its own lock manager and apply less stringent rules at the DBMS level. That is, the built-in mechanism can take the application's business logic into account and sets restrictions on reading and writing data more precisely. Changing the locking mode can give a significant performance boost and reduce the number of transaction lock errors, thanks to an additional check by the lock manager against the restrictions defined within the system before the request is passed to the DBMS.

A significant disadvantage is that the developer has to independently control the consistency of data when entering and processing it. It is likely that after enabling managed locking mode, you will have to write a lot of checks to achieve the previous level of security. Despite this, many companies choose to switch to a managed mode if their capabilities allow it.

When developing checks and restrictions, it is important to remember a peculiarity of managed locks: each of them lasts until the end of the transaction. It follows that locks should be set as close to the end of the transaction as possible, so that the likelihood of waiting is minimal. If you need to perform calculations and write their results, it is better to set the lock after the calculations.

Another common problem with blocking in 1C is importing documents. Many developers use a fairly simple solution - when loading, do not upload documents, but only create them. And then, using a simple mechanism, process all the loaded data in a multi-threaded mode according to key characteristics - items, partners or warehouses.

The algorithm for switching to managed locks in 1C looks simple, but an unqualified 1C administrator can make mistakes that are hard to correct later. The most common problems are excessive or insufficient locking. In the first case, system performance suffers, up to emergency stops of the server cluster. Insufficient locking is dangerous because of accounting errors when users work simultaneously.

Switching to Managed Mode

Although the complete algorithm for switching to managed locking mode is presented below, it should be performed by an experienced specialist. If you do not understand the principles of the locking mechanisms in 1C and the DBMS, you are unlikely to write the restrictions correctly. This applies to complex configurations; on simple ones, novice developers can complete the switch successfully and gain experience:

  • The first step is to change the data lock control mode for the configuration. To do this, open the configuration tree in the Designer and change the mode in the properties of the root element, in the "Compatibility" section. Select "Automatic and Managed" to avoid errors until all objects have been transferred to the new mode;
  • Now it is the documents' turn. After all, it is with their help that we register all the events that need to be controlled. Start converting to managed locks with the most heavily loaded documents. On the "Other" tab, select the "Managed" locking mode;
  • Find all registers associated with each converted document and transfer them to managed mode in the same way as the documents;
  • The next step is finding and modifying all transactions involving the modified objects. This includes explicit transactions containing StartTransaction() calls, as well as all document and register handlers that run in transactions;
StartTransaction();
For Each DocumentToDelete From DocumentList Loop
	DocumentObject = DocumentToDelete.GetObject();
	Try
		DocumentObject.SetDeletionMark(True);
	Except
		Failure = True;
		RollbackTransaction();
		Message("Could not delete document " + DocumentObject);
		Break;
	EndTry;
EndLoop;
If Not Failure Then
	CommitTransaction();
EndIf;
  • Eliminate the FOR UPDATE clause of the query language. Replace it with the DataLock object, which requires changing the query and the algorithm that calls and processes it.

The last two stages are the most complex and require qualifications from the developer, but they are the guarantors of maintaining the working state of accounting in the system.

For implementers who work with standard or custom configurations, and for those preparing for the 1C:Platform Specialist certification.

In this article we will look at:

  • how to use managed locks correctly for operational and non-operational document posting
  • what the absence of locks can lead to
  • how to avoid mistakes that are not discovered immediately and can have serious consequences :)

Reading time: 20 minutes.

So, there are two methods of balance control in 1C:Enterprise 8.3

Let's start with the fact that the designations “old method” and “new method” are quite arbitrary. In fact, if a “new technique” has been used since 2010, it is no longer very new :)

However, we must dwell on this point once more, because distinguishing between these approaches is critical.

The “old method” is an approach to controlling residues that has been used since the days of 1C:Enterprise 8.0.

Since 2010, with the development of the platform and the new capabilities of 1C:Enterprise 8.2, the "new methodology" has been applied (however, not everywhere).

What is the difference?

The fundamental difference is the moment at which balances are checked:

  • In the “old” method, balances are controlled BEFORE recording movements in registers.
    First, we check the balances; if the balances are “not enough” (negative balances will arise), we will not post the document
  • In the "new" method, control occurs AFTER the movements are recorded, that is, after the fact.
    If negative balances appear after posting, the transaction must be "rolled back", that is, the document posting canceled.
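The difference between the two methods can be shown schematically. Here is a minimal sketch in Python (illustrative only, not platform code): balances live in a dictionary standing in for a register, the "old" method checks before writing, and the "new" method writes first and rolls back if the balance went negative.

```python
def post_old_method(register, item, qty):
    """Old method: check the balance BEFORE writing the movement."""
    if register.get(item, 0) < qty:
        return False  # not enough stock - refuse to post the document
    register[item] = register.get(item, 0) - qty
    return True

def post_new_method(register, item, qty):
    """New method: write the movement first, then check AFTER the fact."""
    register[item] = register.get(item, 0) - qty
    if register[item] < 0:
        register[item] += qty  # negative balance - "roll back" the write
        return False
    return True

stock = {"chair": 5}
```

Both functions enforce the same business rule; the new method differs only in when the check runs, which is what lets the real implementation avoid reading balances before the write.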

The advantages and disadvantages of the new technique are discussed in detail in a separate article, so here we limit ourselves to the general thesis: the new technique is better in terms of performance and scalability.

Ok, so the old technique is a thing of the past and this is the destiny of UT 10.3?

No, that's not entirely true.

The new methodology can be used when, at the moment goods are written off, all the necessary data is already in the document and does not need to be calculated.

For example, when the amount to be written off is known from the tabular part of the document. The problem arises with the cost price, because it needs to be calculated before writing to the register, that is, executing a query to the database.

Therefore, the new methodology can be successfully applied if data on quantity and cost are stored in separate registers.

For example, when the quantity is stored in one register and the cost in another.

However, there are configurations where both quantity and cost are tracked in the same register, and there the old method of balance control is still justified!

An example is a single register that stores both quantity and cost.

What about standard configurations? Only the new technique, right?

Not always!

For example, in “1C: Trade Management 11.3” there are 2 registers:

When posting shipping documents, the “Cost of Goods” register is not filled in at all. Data enters this register only when performing routine operations to close the month.

UT 11 uses a new technique, since all data for posting documents can be obtained without accessing controlled registers.

As for "1C:Accounting", both quantity and cost are stored there in a single accounting register, on the corresponding accounts.

That's why BP 3.0 uses the old technique.

Please note that quantitative and cost accounting in UT 11 are carried out with different analytics. For example, cost is additionally maintained by organization, division, manager, type of activity, VAT, and so on.

As part of this article, we will analyze blocking for both the old and new methods of controlling balances.

About operational document posting

There are frequent misconceptions about this simple question.

Sometimes it is believed that operational posting is balance control by the new method. This is not true at all.

Operational posting can be taken into account when controlling balances, but it is not required for it.

Operational posting is the document's ability to register events "here and now", that is, in real time.

It is configured using a special document property:

What does it mean to "register here and now"? For operationally posted documents the platform performs a number of actions:

  • Documents posted today are assigned the current time
  • If two documents are posted simultaneously, each will have its own time (that is, the system will space the documents in different seconds)
  • Documents cannot be posted on a future date.
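The behavior of spacing simultaneous documents in time can be modeled as a monotone clock. This is a sketch in Python (names invented for the example); the real platform works with document points in time, not bare integers:

```python
def make_operational_clock(start):
    """Return a function that hands out strictly increasing timestamps
    (seconds), mimicking how simultaneously posted operational documents
    each receive their own time."""
    last = [start - 1]  # last second already handed out

    def next_time(requested):
        # no two documents share the same second: if the requested time
        # was already used, the document is pushed to the next free second
        t = max(requested, last[0] + 1)
        last[0] = t
        return t

    return next_time

clock = make_operational_clock(100)
```

Two documents requesting the same second get 100 and 101; a later request for an already-passed second is likewise pushed forward.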

But the main thing is that the system passes the operational-posting flag into the document's posting handler:

For operationally posted documents, the period parameter in a balance query can be omitted; the current balances, stored as of December 31, 3999, will be obtained:

Current balances are stored in the system and are obtained as quickly as possible (balances for other dates in most cases are obtained by calculation).

Thus, operational posting can be used with both the old and the new method of balance control.

Interesting fact.

In UT 11, documents that write off items are prohibited from being posted operationally. For example, these are the documents "Sales of goods and services", "Assembly of goods", "Movement of goods", "Internal consumption of goods", and others.

Why is this done?

In this system, balance control is always performed at the current point in time (the Period parameter is not passed in the query). And the absence of operational posting makes it possible to enter documents with future dates, which clients often require.

Balance control by the new method - without locks

Let us briefly consider the balance control algorithm when posting the "Sales of goods and services" document in a demo configuration.

There are two registers:

  • Free Remains – for quantity accounting
  • Cost of Goods – for cost accounting

To control product balances, it is enough to work with the Free Remains register.

The posting processing code will look like this:

Procedure PostingProcessing(Cancel, Mode)

Query = New Query;

// 1. Initializing the temporary table manager
#Region Region1
Query.TempTablesManager = New TempTablesManager;
#EndRegion

// 2. Query grouping the tabular section data
#Region Region2
Query.Text =
"SELECT
|	ProductsDocument.Nomenclature AS Nomenclature,
|	SUM(ProductsDocument.Quantity) AS Quantity,
|	MIN(ProductsDocument.LineNumber) AS LineNumber
|INTO ProductsDocument
|FROM
|	Document.SalesOfGoodsServices.Products AS ProductsDocument
|WHERE
|	ProductsDocument.Ref = &Ref
|GROUP BY
|	ProductsDocument.Nomenclature
|INDEX BY
|	Nomenclature
|;
|SELECT
|	&Date AS Period,
|	VALUE(AccumulationRecordType.Expense) AS RecordType,
|	ProductsDocument.Nomenclature AS Nomenclature,
|	ProductsDocument.Quantity AS Quantity
|FROM
|	ProductsDocument AS ProductsDocument";
Query.SetParameter("Ref", Ref);
Query.SetParameter("Date", Date);
#EndRegion

// 3. Filling the register record set from the query result
#Region Region3
Movements.FreeRemains.Load(Query.Execute().Unload());
#EndRegion

// 4. Recording the movements to the database
#Region Region4
Movements.FreeRemains.Write = True;
Movements.Write();
#EndRegion

// 5. Query obtaining negative balances
#Region Region5
Query.Text =
"SELECT
|	ProductsDocument.LineNumber AS LineNumber,
|	-FreeRemainsBalance.QuantityBalance AS Shortage
|FROM
|	ProductsDocument AS ProductsDocument
|	INNER JOIN AccumulationRegister.FreeRemains.Balance(
|		&MomentOfTime,
|		Nomenclature IN
|			(SELECT
|				ProductsDocument.Nomenclature AS Nomenclature
|			FROM
|				ProductsDocument AS ProductsDocument)) AS FreeRemainsBalance
|		ON ProductsDocument.Nomenclature = FreeRemainsBalance.Nomenclature
|WHERE
|	FreeRemainsBalance.QuantityBalance < 0";
#EndRegion

// 6. Determining the point in time for balance control
#Region Region6
BalanceControlMoment =
	?(Mode = DocumentPostingMode.RealTime,
		Undefined,
		New Boundary(Ref.PointInTime(), BoundaryType.Including));
Query.SetParameter("MomentOfTime", BalanceControlMoment);
QueryResult = Query.Execute();
#EndRegion

// 7. If the query is not empty, negative balances have been formed
#Region Region7
If Not QueryResult.IsEmpty() Then
	Cancel = True;
	ErrorsSelection = QueryResult.Select();
	While ErrorsSelection.Next() Do
		Message = New UserMessage;
		Message.Text = "Not enough product, shortage: " + ErrorsSelection.Shortage;
		Message.Field = "Products[" + (ErrorsSelection.LineNumber - 1) + "].Quantity";
		Message.Message();
	EndDo;
EndIf;
#EndRegion

// 8. If there are errors, return from the event handler
#Region Region8
If Cancel Then
	Return;
EndIf;
#EndRegion

EndProcedure

Let's consider the key points of the balance control algorithm.

1. Initializing the temporary table manager

The manager will be needed so that the temporary table created in the query is available in subsequent queries.

Thus, the data from the tabular part is obtained once, saved in a temporary table and then used repeatedly.

2. Query grouping tabular data

The query selects grouped data from the tabular section.

Please note that the document line number is also selected - it will be needed to attach the error message to context. The MIN() aggregate function is used for the line number, so the message will be tied to the first row where the given item occurs.

The first query in the batch creates a temporary table. The second query selects the temporary table data and adds the two fields required for each register record - Period and RecordType.

The advantages of this approach:

  • There is no need to perform pre-cleanup, that is, use the Clear() method
  • There is no need to organize a loop based on the selection or tabular part.

By the way, a similar approach is used in standard configurations, in particular in UT 11 and BP 3.0.

4. Recording movements in the database

The recording could be performed with one command instead of two - Movements.FreeRemains.Write().

And in our case, when one register is written, there will be no difference.

But this approach is more universal:

  • First, set the Write flag for the required register record sets
  • Then call the Write() method of the Movements collection, which writes to the database all sets whose Write flag is set

After Movements.Write() executes, the Write flag of all record sets is reset to False.

You also need to remember that at the end of the transaction (after PostingProcessing), the system automatically writes to the database only those record sets whose Write flag is set to True.

Typical solutions use a similar scheme to record movements. Why?

The Write() method of the Movement collection writes sets of records in the same sequence, even for different documents.

Recording movements manually can lead to problems.

Let's give an example.

If in the "Sales" document you write:

Movements.FreeRemains.Write();
...
Movements.CostOfGoods.Write();

and in the "Movement of goods" document the order is reversed:

Movements.CostOfGoods.Write();
...
Movements.FreeRemains.Write();

then this can lead to deadlocks between documents on intersecting sets of items.
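Writing record sets in one fixed order in all documents is the classic deadlock-avoidance technique of a global lock order. A tiny sketch in Python (illustrative names; the register names stand in for lockable resources):

```python
def acquisition_order(resources):
    """Acquire locks in one global, deterministic order regardless of how
    the caller listed the resources. If every transaction acquires in the
    same order, no cycle of mutual waiting (deadlock) can form."""
    return sorted(resources)

# "Sales" asks for (FreeRemains, CostOfGoods), "Movement of goods" asks
# for the reverse; both end up acquiring in the same order.
sales_order = acquisition_order(["FreeRemains", "CostOfGoods"])
movement_order = acquisition_order(["CostOfGoods", "FreeRemains"])
```

Because both transactions lock CostOfGoods before FreeRemains, one of them simply waits for the other to finish instead of each holding a resource the other needs.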

The approach to recording movements described above can be used if the appropriate register records writing option is set in the document properties:

5. Query receiving negative balances

The request selects negative balances by item from the document.

A negative balance is a shortage (deficit) of a product.

The join to the document items is performed only to obtain the line number.

If we did not plan to link messages to document fields, the query could be greatly simplified - data would be obtained from one table (the remainder of the register).

6. Determining the point in time for balance control

This is where operational posting comes in handy.

If the document is posted operationally, the moment for obtaining balances is Undefined, which means the current balances.

If this is non-operational posting, we take a point in time just "after" the document, so that the movements just written are taken into account.

Recall that obtaining current balances is a fast operation compared to obtaining balances at an arbitrary point in time.

This is precisely the benefit of operationally posted documents.

7. If the request is not empty, it means that negative balances have been formed

In the loop, we go through all the negative balances and display messages attached to the rows of the tabular section.

This is what the diagnostic message will look like:

8. If there are errors, then return from the event handler

If there was at least one error, we exit the procedure.
There is no point in continuing: the document will not be posted anyway. (Next we will develop the batch write-off code.)

Implementation of cost write-offs by batch

After the balance check has succeeded, you can begin writing off batches.

The FIFO write-off code will look like this:

// I. Analysis of a forward shift of the document date

Procedure BeforeWrite(Cancel, WriteMode, PostingMode)

	If WriteMode = DocumentWriteMode.Posting
		And Not ThisObject.IsNew()
		And ThisObject.Posted Then

		Query = New Query;
		Query.Text =
		"SELECT
		|	Document.Date AS Date
		|FROM
		|	Document.SalesOfGoodsServices AS Document
		|WHERE
		|	Document.Ref = &Ref";
		Query.SetParameter("Ref", ThisObject.Ref);
		QueryResult = Query.Execute();
		DocumentSelection = QueryResult.Select();
		DocumentSelection.Next();
		ThisObject.AdditionalProperties.Insert("OldDate", DocumentSelection.Date);
	Else
		ThisObject.AdditionalProperties.Insert("OldDate", False);
	EndIf;

EndProcedure

Procedure OnWrite(Cancel)

	If ThisObject.AdditionalProperties.OldDate <> False Then
		ThisObject.AdditionalProperties.Insert("DocumentDateMovedForward",
			ThisObject.Date > ThisObject.AdditionalProperties.OldDate);
	Else
		ThisObject.AdditionalProperties.Insert("DocumentDateMovedForward", False);
	EndIf;

EndProcedure

Procedure PostingProcessing(Cancel, Mode)

Query = New Query;

// 1. Initializing the temporary table manager
#Region Region1
...
#EndRegion

// 2. Query grouping the tabular section data
#Region Region2
...
#EndRegion

// 4. Recording the movements to the database
#Region Region4
...
#EndRegion

// 5. Query obtaining negative balances
#Region Region5
...
#EndRegion

// 6. Determining the point in time for balance control
#Region Region6
...
#EndRegion

// 7. If the query is not empty, negative balances have been formed
#Region Region7
...
#EndRegion

// 8. If there are errors, return from the event handler
#Region Region8
...
#EndRegion

// II. Preparing the record set for the "Cost of Goods" register
#Region RegionII
If AdditionalProperties.DocumentDateMovedForward Then
	// the date moved forward: write the cleared movements immediately,
	// so that they do not distort the batch balances read below
	Movements.CostOfGoods.Write = True;
	Movements.Write();
EndIf;
Movements.CostOfGoods.Write = True;
#EndRegion

// III. Query obtaining batch balances for FIFO write-off
#Region RegionIII
Query.Text =
"SELECT
|	ProductsDocument.Nomenclature AS Nomenclature,
|	ProductsDocument.Quantity AS Quantity,
|	ProductsDocument.LineNumber AS LineNumber,
|	Balances.QuantityBalance AS QuantityBalance,
|	Balances.AmountBalance AS AmountBalance,
|	Balances.Batch AS Batch
|FROM
|	ProductsDocument AS ProductsDocument
|	INNER JOIN AccumulationRegister.CostOfGoods.Balance(
|		&MomentOfTime,
|		Nomenclature IN
|			(SELECT
|				ProductsDocument.Nomenclature
|			FROM
|				ProductsDocument AS ProductsDocument)) AS Balances
|		ON ProductsDocument.Nomenclature = Balances.Nomenclature
|ORDER BY
|	Balances.Batch.Date
|TOTALS
|	MAXIMUM(Quantity),
|	SUM(QuantityBalance)
|BY
|	Nomenclature";
QueryResult = Query.Execute();
#EndRegion

// IV. Loop over the document nomenclature
#Region RegionIV
NomenclatureSelection = QueryResult.Select(QueryResultIteration.ByGroups);
While NomenclatureSelection.Next() Do

	// V. Get the quantity to write off
	QuantityToWriteOff = NomenclatureSelection.Quantity;

	// VI. Loop over batches by FIFO
	BatchSelection = NomenclatureSelection.Select();
	While BatchSelection.Next() And QuantityToWriteOff > 0 Do

		// VII. Check for a zero balance
		If BatchSelection.QuantityBalance = 0 Then
			Continue;
		EndIf;

		Record = Movements.CostOfGoods.Add();
		Record.RecordType = AccumulationRecordType.Expense;
		Record.Period = Date;
		Record.Nomenclature = BatchSelection.Nomenclature;
		Record.Batch = BatchSelection.Batch;

		// VIII. Calculating the quantity and amount to write off
		Record.Quantity = Min(QuantityToWriteOff, BatchSelection.QuantityBalance);
		Record.Amount = Record.Quantity * BatchSelection.AmountBalance
			/ BatchSelection.QuantityBalance;

		// IX. Reduce the quantity left to write off
		QuantityToWriteOff = QuantityToWriteOff - Record.Quantity;
	EndDo;
EndDo;
#EndRegion

EndProcedure

Let's look at the key points of the algorithm for writing off batches using FIFO.

I. Analysis of document date shift forward

Here we determine whether the date of the document being posted is moving forward. This information will be needed later, when clearing the old movements.

To analyze the document date shift, two events are required:

  • BeforeWrite – to obtain the old document date and to check the write mode
  • OnWrite – to obtain the new document date

We pass data between the events through a special collection of the object – AdditionalProperties. It lives as long as the current object instance is in memory, so it is available to all events during posting.

A similar technique is used in the standard “1C:Accounting 8” configuration, but only one event, BeforeWrite, is used there.

Why is OnWrite not needed in 1C:Accounting?

It's simple: shipping documents cannot be posted operatively in accounting. This means the document time never receives an operational timestamp (when the document is reposted on the current day), so both the old and the new document date can be obtained in the BeforeWrite event.
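The two-event technique can be sketched outside 1C as well. Below is a minimal Python illustration (the article's code is 1C) of passing the old date between the two handlers through an AdditionalProperties-style dictionary; all names here are illustrative, not platform API.

```python
from datetime import datetime

def before_write(doc, old_date_from_db):
    # BeforeWrite: the database still stores the old version of the document,
    # so the old date is read there and stashed for the next event
    doc["additional_properties"]["old_document_date"] = old_date_from_db

def on_write(doc):
    # OnWrite: doc["date"] already holds the new date
    props = doc["additional_properties"]
    props["date_moved_forward"] = doc["date"] > props["old_document_date"]

doc = {"date": datetime(2024, 5, 10, 12, 1), "additional_properties": {}}
before_write(doc, datetime(2024, 5, 10, 12, 0))  # old date read from the database
on_write(doc)
print(doc["additional_properties"]["date_moved_forward"])  # True: the date moved forward
```

The dictionary plays the role of AdditionalProperties: it lives as long as the object instance, so both handlers see it.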

II. Preparing sets of records for the “Cost of goods” register

The document's movement clearing mode is set to “On posting cancellation”:

Thus, when reposting, the balance query may still see the document's own old movements. But this can only happen if the document date has been shifted forward; hence it only makes sense to clear the movements when the date moves forward.

Here's an example:

  • The balance of LG monitors at the document's point in time is 10 pcs.
  • A document is posted that writes off 8 pcs.
  • The time of the same document is then increased by 1 minute, and the document is reposted.

If old movements are not deleted, the system will report a shortage of 6 monitors, since the current document movements have already written off 8 of the 10 available monitors.
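The arithmetic behind the “shortage of 6 monitors” can be checked in a few lines. A minimal sketch (in Python, purely illustrative) of what happens when the old movements are not cleared:

```python
# The old movements are still in the register when the reposted document
# checks the balance at its new, later point in time.
balance_before_doc = 10
old_movement_qty = 8   # written off by the first posting
required_qty = 8       # the same document, reposted 1 minute later

# without clearing, the old write-off still counts against the balance
available_at_new_time = balance_before_doc - old_movement_qty
shortage = required_qty - available_at_new_time
print(shortage)  # 6 - exactly the shortage the system would report
```

Clearing the old movements first restores `available_at_new_time` to 10, and the repost succeeds.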

Note. One sometimes sees the advice to clear movements only during operative posting.

But this is wrong: it does not cover the case of modifying “non-operative” documents (yesterday's and earlier).

That is, the “shortage of 6 monitors” problem (see above) would then be solved only for documents modified today.

III. Query retrieving batch balances for FIFO write-off

In the query we read the balances by batch and at the same time apply totals by item.

At the totals level we obtain the quantity from the document – MAX(Quantity) – and the total batch balance – SUM(QuantityRemaining).

Could the quantity in the document exceed the item's total balance across all batches?

If the quantity movements in the “Free Balances” and “Cost of Goods” registers are written synchronously (both receipts and expenses), such a situation cannot arise. This is what we will rely on when writing off the batches.

IV. Loop over the document items

Thanks to the totals in the query, the outer loop iterates over the items from the document.

V. Getting the quantity to write off

We store the quantity that needs to be written off; it will decrease as batches are processed.

VI. Batch loop by FIFO

The nested loop iterates over the batches of the current item.

VII. Check for zero balance

Generally speaking, a batch with a zero quantity balance indicates an error in the system data (yet such a situation is possible). The point is that in this case the amount is NOT zero (the balances virtual table does not return records whose resources are all zero).

Therefore, we decide to simply skip such erroneous batches. If desired, diagnostics can be shown to the user.

VIII. Calculation of quantity and amount to be written off

The quantity to be written off is the minimum value between the remainder of the batch and what remains to be written off.

The amount is calculated by an elementary proportion.

If the entire balance of a batch is written off, the entire amount of that batch is written off with it. This is elementary-school math: X*Y/X = Y :)

That is, there is NO need for additional checks (though such advice is sometimes given) to ensure that the entire amount is written off. The issue this advice targets even has its own name: the “penny problem”.

And those who give such advice should take a look at the “1C:Accounting 8” configuration. There (oh, horror!) there is no check that the entire batch amount is written off :)

Here is a screenshot of the general module “Goods Accounting”, the “Write Off Remaining Goods” method:

IX. Reducing the quantity to write off

We need to know how much remains to be written off, so we subtract the quantity of the register movement we just created.
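To make steps V–IX concrete, here is a minimal Python sketch of the FIFO write-off loop (the article's code is 1C; the field names and data layout below are illustrative assumptions):

```python
def write_off_fifo(qty_to_write_off, batches):
    """batches: list of dicts with 'batch', 'qty_remaining', 'amount_remaining',
    ordered by batch point in time (FIFO order)."""
    movements = []
    remaining = qty_to_write_off                      # V. quantity to write off
    for batch in batches:                             # VI. batch loop, FIFO order
        if remaining <= 0:
            break
        if batch["qty_remaining"] == 0:               # VII. skip erroneous zero-quantity batches
            continue
        qty = min(remaining, batch["qty_remaining"])  # VIII. quantity: the minimum of the two
        # the amount is a simple proportion; a full batch yields its full amount
        amount = round(qty * batch["amount_remaining"] / batch["qty_remaining"], 2)
        movements.append({"batch": batch["batch"], "qty": qty, "amount": amount})
        remaining -= qty                              # IX. reduce the remaining quantity
    return movements
```

Note that when `qty == qty_remaining`, the proportion collapses to X*Y/X = Y, so the full batch amount is written off with no special-case check - the “penny problem” does not arise.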

Why are managed locks needed?

Here we come to managed locks.

It would seem that the algorithms presented above work like clockwork. You can test them yourself (links to database downloads at the end of the article).

But under real multi-user operation, problems will begin - and, as often happens, they will not be detected immediately...

Let's take the most typical write-off problem: two users almost simultaneously try to write off the same item (post a sale):

In this example, two users post a sale of goods almost simultaneously - document No. 2 starts posting slightly later than document No. 1.

When reading the balances, the system reports 10 units to both, and both documents post successfully. The sad result: minus 5 LG monitors in the warehouse.

Yet balance control as such works! If document No. 2 is posted after document No. 1 has finished, the system will not post document No. 2:
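The lost update above can be sketched in a few lines of Python. The 8 and 7 piece quantities are assumed for illustration - the article only fixes the starting balance of 10 and the final result of -5:

```python
# Both transactions read the balance BEFORE either one writes,
# so the balance check passes for both against the same stale value.
balance = 10

def try_write_off(read_balance, qty):
    # the check uses the (possibly stale) balance read earlier in the transaction
    return qty if read_balance - qty >= 0 else 0

snapshot1 = balance                       # T1: document No. 1 reads the balance (10)
snapshot2 = balance                       # T1': document No. 2 reads the same balance (10)
balance -= try_write_off(snapshot1, 8)    # document No. 1 posts: check 10 >= 8 passes
balance -= try_write_off(snapshot2, 7)    # document No. 2 posts: check 10 >= 7 also passes
print(balance)  # -5: both checks succeeded against the stale snapshot
```

Both checks succeed because each document validated against the balance as it was before either write - exactly the scenario in the diagram.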

Sometimes one hears the misconception: “Only 3-4 users work in my database at the same time, the probability of posting documents in parallel is zero, so there is no need to bother with locks.”

This is very dangerous reasoning.

Even two users can post documents almost simultaneously, for example, if one of them performs group posting of documents.

Besides, no one is immune to growth in the number of users. If the business takes off, new salespeople, storekeepers, logisticians and so on will be needed. That is why you should build solutions that work reliably in a multi-user environment from the start.

How to solve the problem when posting documents in parallel?

The solution is simple - block LG monitors at time T1, so that other transactions cannot access the balances for this product.

Then, at time T2, the second transaction will wait for the LG monitors to be unlocked. After that, it will read the now-current balance, and the write-off will either be posted or refused.
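The fix can be sketched with a `threading.Lock` standing in for a 1C managed lock (the 8/7 quantities are, again, an assumed illustration; the calls below run sequentially, but the same code stays correct under real concurrency):

```python
import threading

balance = 10
balance_lock = threading.Lock()  # stands in for a managed lock on the item's balance

def post_write_off(qty):
    global balance
    with balance_lock:           # T1: lock the item BEFORE reading the balance
        if balance - qty < 0:    # the check now always sees committed, current data
            return False         # posting refused: not enough stock
        balance -= qty           # T3: write; the lock is released on exit
        return True

results = [post_write_off(8), post_write_off(7)]
print(results, balance)  # [True, False] 2
```

The second write-off now waits for the lock, re-reads the updated balance (2), fails the check, and the stock never goes negative.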

A few words about the classification of locks.

There are two types of locks:

  • Object
  • Transactional.

To put it simply, object locks prevent two users from interactively editing the same object (a catalog item or a document).

Transaction locks, in turn, allow program code to work with consistent, up-to-date data while writing register movements.

In this article we are interested in transaction locks, hereafter simply “locks”.

When should locks be applied?

The task of setting locks becomes relevant as soon as more than one user works in the database.

Locks are set inside transactions - and when do transactions occur? That's right: the most common case is document posting.

So, should locks be set when posting every document?

By no means. Setting locks “just in case” is definitely not worth it: locks themselves reduce user concurrency (system scalability).

Locks must be set on resources (table rows) that are read and then modified within a transaction - for example, when posting documents.

In the example above, such a resource is the product balance. The system should have locked the balance from the moment it was read (T1) until the end of the transaction (T3).

Note. The transaction for posting document No. 1 actually begins before the balance is read, but for simplicity we treat T1 as both the start of posting and the moment the balance is read.

An example where no lock is needed is posting a “Goods receipt” document. Here there is no competition for resources (balances, ...), so a lock would only do harm by reducing the scalability of the system.

Automatic and managed locks

We will not go into theory here (that is a topic for a separate article); we will only say that managed locks are the more efficient option.

Instead of theory, here is a practical argument: all modern standard configurations run on managed locks.

Therefore, the corresponding mode is selected in our demo configuration:

Managed locks in the new balance control technique

We will lock the “Free Balances” register, and only for the items present in the document.

Moreover, the correct locking strategy is to lock as late as possible.

With the new balance control method, this must be done before (or at the moment of) writing the movements to the “Free Balances” register, so that other transactions cannot change this shared resource.

The lock can be set manually (programmatically), and a little later we will show how this is done.

But an extra bonus of the new balance control technique is that locking the shared resource takes only one line of code.

You just need to set the LockForUpdate property on the register record set:

// 3.1. Locking the register balances
#Region Region3_1
RegisterRecords.FreeBalances.LockForUpdate = True;
#EndRegion

// 4. Writing the movements to the database
#Region Region4
RegisterRecords.FreeBalances.Write = True;
RegisterRecords.Write();
#EndRegion
...

As a result, two transactions will not be able to simultaneously change the free balances of the same item.

Strictly speaking, the LockForUpdate property does not itself set a managed lock - it only disables the splitting of register totals on write.

What matters for this article, however, is that the system will then place a lock on the combination of data being written to the register. We will examine how LockForUpdate works in detail in a separate article.

By the way, in the standard UT 11 it is not easy to find where LockForUpdate is set for the “Free Balances” register: it is done in the register's record set module, in the BeforeWrite event.

That's it - one line of code ensures the correct operation of the system!

Important. We do not lock the “Cost of Goods” register.

Why? Such a lock would be redundant (and an extra load on the 1C server), because the quantity movements in the “Free Balances” and “Cost of Goods” registers are always written synchronously, one right after the other.

Therefore, by locking an item's free balances, we also keep other transactions away from that item in the “Cost of Goods” register.

For the old balance control method, however, the lock is applied differently. First, let's look at the batch write-off algorithm for that case.

Old method of balance control

Let us remind you that the old technique can be applied when quantity and cost are accounted for in a single register.

Let this be the “Cost of Goods” register:

Then the posting algorithm of the “Sales of goods” document will look like this:

// 1. "BeforeWrite" event handler
Procedure BeforeWrite(Cancel, WriteMode, PostingMode)

	If WriteMode = DocumentWriteMode.Posting
		And Not ThisObject.IsNew()
		And ThisObject.Posted Then

		Query = New Query;
		Query.Text =
		"SELECT
		|	Document.Date AS Date
		|FROM
		|	Document.SalesOfGoodsAndServices AS Document
		|WHERE
		|	Document.Ref = &Ref";
		Query.SetParameter("Ref", ThisObject.Ref);
		QueryResult = Query.Execute();
		SelectionDocument = QueryResult.Select();
		SelectionDocument.Next();

		ThisObject.AdditionalProperties.Insert("OldDocumentDate", SelectionDocument.Date);

	Else
		ThisObject.AdditionalProperties.Insert("DocumentDateMovedForward", False);
	EndIf;

EndProcedure

// "OnWrite" event handler
Procedure OnWrite(Cancel)

	If Not ThisObject.AdditionalProperties.Property("DocumentDateMovedForward") Then

		ThisObject.AdditionalProperties.Insert("DocumentDateMovedForward",
			ThisObject.Date > ThisObject.AdditionalProperties.OldDocumentDate);

		Message(ThisObject.AdditionalProperties.DocumentDateMovedForward); // debug output
	EndIf;

EndProcedure

Procedure Posting(Cancel, PostingMode)

	// 2. Removing the "old" document movements
	If AdditionalProperties.DocumentDateMovedForward Then
		RegisterRecords.CostOfGoods.Write = True;
		RegisterRecords.CostOfGoods.Clear();
		RegisterRecords.Write();
	EndIf;

	// 3. Setting the flag to write the movements at the end of the transaction
	RegisterRecords.CostOfGoods.Write = True;

	// 4. Query retrieving batch balances at the document's point in time
	Query = New Query;
	Query.Text =
	"SELECT
	|	SalesGoods.Nomenclature AS Nomenclature,
	|	SUM(SalesGoods.Quantity) AS Quantity,
	|	MIN(SalesGoods.LineNumber) AS LineNumber
	|INTO GoodsDocument
	|FROM
	|	Document.SalesOfGoodsAndServices.Goods AS SalesGoods
	|WHERE
	|	SalesGoods.Ref = &Ref
	|GROUP BY
	|	SalesGoods.Nomenclature
	|INDEX BY
	|	Nomenclature
	|;
	|////////////////////////////////////////////////////////////////////////////////
	|SELECT
	|	GoodsDocument.Nomenclature AS Nomenclature,
	|	GoodsDocument.Quantity AS Quantity,
	|	GoodsDocument.LineNumber AS LineNumber,
	|	ISNULL(Remainders.QuantityRemaining, 0) AS QuantityRemaining,
	|	ISNULL(Remainders.AmountRemaining, 0) AS AmountRemaining,
	|	Remainders.Batch AS Batch
	|FROM
	|	GoodsDocument AS GoodsDocument
	|	LEFT JOIN AccumulationRegister.CostOfGoods.Balances(
	|			&PointInTime,
	|			Nomenclature IN
	|				(SELECT
	|					T.Nomenclature AS Nomenclature
	|				FROM
	|					GoodsDocument AS T)) AS Remainders
	|		ON GoodsDocument.Nomenclature = Remainders.Nomenclature
	|ORDER BY
	|	Remainders.Batch.PointInTime
	|TOTALS
	|	MAX(Quantity),
	|	SUM(QuantityRemaining)
	|BY
	|	LineNumber";

	Query.SetParameter("PointInTime", PointInTime());
	Query.SetParameter("Ref", Ref);

	QueryResult = Query.Execute();

	SelectionNomenclature = QueryResult.Select(QueryResultIteration.ByGroups);

	// 5. Loop over items - checking that the quantity is sufficient for write-off
	While SelectionNomenclature.Next() Do

		NomenclatureShortage = SelectionNomenclature.Quantity - SelectionNomenclature.QuantityRemaining;

		If NomenclatureShortage > 0 Then
			Message = New UserMessage;
			Message.Text = "Not enough goods, short by: " + NomenclatureShortage;
			Message.Field = "Goods[" + (SelectionNomenclature.LineNumber - 1) + "].Quantity";
			Message.SetData(ThisObject);
			Message.Message();
			Cancel = True;
		EndIf;

		If Cancel Then
			Continue;
		EndIf;

		// 6. Getting the quantity to write off
		RemainingWrite = SelectionNomenclature.Quantity;
		SelectionBatch = SelectionNomenclature.Select();

		// 7. Batch loop
		While SelectionBatch.Next() And RemainingWrite > 0 Do

			Movement = RegisterRecords.CostOfGoods.AddExpense();
			Movement.Period = Date;
			Movement.Nomenclature = SelectionBatch.Nomenclature;
			Movement.Batch = SelectionBatch.Batch;
			// 9. Calculating the quantity to write off
			Movement.Quantity = Min(RemainingWrite, SelectionBatch.QuantityRemaining);
			// 10. Calculating the write-off amount
			Movement.Amount = Movement.Quantity *
				SelectionBatch.AmountRemaining / SelectionBatch.QuantityRemaining;

			// 11. Reducing the quantity to write off
			RemainingWrite = RemainingWrite - Movement.Quantity;

		EndDo;
	EndDo;

EndProcedure
