Extending system tables (and not only) in MODX Revolution

I am currently redoing a news portal on MODX Revolution. Since the site gets up to 100,000 visitors a day, performance is one of the most important concerns here. Given that the database already contains more than 75,000 records, a careless (or even the traditional) approach to MODX development practically guarantees a slow site, and if requests come in faster than queries complete, the server will simply go down. In this article I describe some of the techniques I used to solve these problems.


1. Slow cache generation.


Many of you probably know that when MODX rebuilds its cache, it iterates over all documents and fills the resource map in the context cache. (If you don't, I wrote about it in detail here.) Although since version 2.2.7 (or thereabouts) MODX has had a system setting that disables caching of the resource map (cache_alias_map), this solves the problem only partially: MODX stops caching document URLs, but it still builds the structure of document IDs by walking every document in the database. This leads to two things: first, the context cache file grows, and second, the script may fail to finish within 30 seconds, leaving a corrupted cache file, which can cause fatal errors and render the site non-functional.

But even if the server does manage to pull all the documents and fill the cache, let's look at comparative numbers for a single request under different settings. The figures are very approximate, since a lot depends on server configuration, and the same site will consume different amounts of memory on different servers; but taken side by side they give a good idea of the difference between the states. To measure memory consumption we will call the getdata processor to fetch 10 articles.

Option one: full caching of the resource map is enabled.
Context cache file size: 5,792,604 bytes.
Memory consumption per request: 28.25 MB
Time: 0.06-0.1 sec.


Option two: full caching of the resource map is disabled (system setting cache_alias_map == false).
Context cache file size: 1,684,342 bytes.
Memory consumption per request: 15.5 MB
Time: 0.03-0.06 sec.


Option three: caching of the resource map is completely disabled with the cacheOptimizer patch.
Context cache file size: 54,945 bytes.
Memory consumption per request: 4.5 MB
Time: 0.02-0.03 sec.


And that is with only 75,000 resources. With hundreds of thousands, the difference becomes much more noticeable.

Of course, there are downsides. For example, Wayfinder, which builds menus from the alias map data, will no longer work, so menus have to be built differently. I often use the menu processor, which I wrote about here (see item 2, "Replacing Wayfinder").


2. Poor performance due to document TV parameters.


But this is the main and most interesting reason for writing this article. There is probably not a single MODX developer who has never used TV fields. They solve two problems at once: 1) they add custom fields to documents; 2) they provide different editing interfaces depending on the field type.

But they have a serious drawback: they are all stored in one table. This creates several problems:

1. You cannot enforce uniqueness of values at the database level.

2. You cannot use different data types for different TV fields. All TV values are stored in a single value column of type mediumtext. This means we cannot store larger volumes of data, numeric values are stored as strings (which imposes extra requirements on queries with sorting), we cannot properly compare data from different columns, we cannot set up secondary keys, and there are plenty of other annoyances because of this.
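The sorting problem is easy to demonstrate. Here is a minimal standalone PHP sketch (no MODX involved, sample values invented for illustration): since every TV value comes out of the mediumtext column as a string, numbers compare lexicographically unless you cast them explicitly.

```php
<?php
// TV values as they come out of the mediumtext column: strings.
$prices = ["9", "10", "100", "25"];

// A plain string sort compares character by character: "10" < "100" < "25" < "9".
$asStrings = $prices;
sort($asStrings, SORT_STRING);

// Getting the intended order requires an explicit numeric sort
// (the SQL analogue would be ORDER BY CAST(value AS UNSIGNED)).
$asNumbers = $prices;
sort($asNumbers, SORT_NUMERIC);

echo implode(",", $asStrings), "\n";  // 10,100,25,9
echo implode(",", $asNumbers), "\n";  // 9,10,25,100
```

With a real typed column (tinyint, int, etc.) in site_content, the database sorts and indexes the values correctly without any casting.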
3. Poor performance when selecting from several tables. For example, we have documents with several TV fields each, of which at least 2-3 are almost always filled in. We want one query that returns both the document data and its fields. There are two main ways to build such a query:

1. Simply join the TV table.

$q = $modx->newQuery("modResource");
$alias = $q->getAlias();
$q->leftJoin("modTemplateVarResource", "tv", "tv.contentid = {$alias}.id");
$q->select(array(
    "tv.*",
    "{$alias}.*",
));

But this has a serious drawback: the result set contains C*TV rows, where C is the number of records in site_content and TV is the number of rows in site_tmplvar_contentvalues per document. That is, with, say, 100 documents and 3 TV rows per document on average, we get 100*3 = 300 result rows.

Because each document then produces more than one result row, the data has to be post-processed in PHP to build unique records. That is what the getdata processor does for us, and it too adds load and increases execution time.

On this news portal I had on average 3 of the main TV rows per document, which adds up to ~225,000 TV records. Even an optimized query with conditions took 1-4 seconds to execute, which is far too long.
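To make that post-processing concrete, here is a small standalone PHP sketch (the field names and data are invented for illustration) of what a processor has to do with the C*TV result rows: fold them back into one record per document.

```php
<?php
// Rows as returned by the single-join query: one row per (document, TV) pair,
// with the document columns duplicated on every row. Illustrative data only.
$rows = [
    ["id" => 1, "pagetitle" => "First",  "tmplvarid" => 1, "value" => "news"],
    ["id" => 1, "pagetitle" => "First",  "tmplvarid" => 2, "value" => "a.jpg"],
    ["id" => 2, "pagetitle" => "Second", "tmplvarid" => 1, "value" => "blog"],
];

// Fold them into one record per document, TV values collected in a sub-array.
$docs = [];
foreach ($rows as $row) {
    $id = $row["id"];
    if (!isset($docs[$id])) {
        $docs[$id] = ["id" => $id, "pagetitle" => $row["pagetitle"], "tvs" => []];
    }
    $docs[$id]["tvs"][$row["tmplvarid"]] = $row["value"];
}

echo count($docs), "\n"; // 2 documents out of 3 rows
```

This loop is cheap for a handful of rows but becomes real work when the joined result runs to hundreds of thousands of rows, which is exactly the load described above.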

2. Join each TV field individually.
A sample query:

$q = $modx->newQuery("modResource");
$alias = $q->getAlias();
$q->leftJoin("modTemplateVarResource", "tv1", "tv1.tmplvarid = 1 AND tv1.contentid = {$alias}.id");
$q->leftJoin("modTemplateVarResource", "tv2", "tv2.tmplvarid = 2 AND tv2.contentid = {$alias}.id");
// .........
$q->select(array(
    "tv1.value as tv1_value",
    "tv2.value as tv2_value",
    "{$alias}.*",
));

Such a query executes faster, since the result set contains exactly as many rows as there are documents, but the load is still considerable once the record count climbs into the tens and hundreds of thousands and the number of TVs passes a dozen (each TV means one more joined table).

By far the best option in this case is to store the TV values in the system site_content table itself, that is, each value in its own column of that table.

If anyone thinks this is yet another take on the well-worn topic of extending classes, it is not. Traditionally we are taught to extend the existing classes and add the needed columns to our own class (or register our own table). But this path is not optimal. The main problem is that we extend the class without changing the class itself: the extension applies only to our extending class and to any classes that extend it in turn. That sounds confusing, so let me explain. We have a base class, modResource. It is extended by modDocument, modWebLink, modSymLink and so on; they all inherit the table map from modResource. If we extend modResource with our own class, our class will have the new columns we added, but modDocument will not, because it does not extend our class. For the new columns to show up in all classes extending modResource, the information has to live in the modResource map itself. But how do we do that without touching the system files? In fact, I wrote about part of this over two years ago (the article has since moved here), but only now have I put it to use in production. Here is what we do:

1. Create a new component that is loaded as an extensionPackage (I wrote about this in detail here).

2. Add the new columns to the site_content table via phpMyAdmin or a similar tool.

3. Using CMPGenerator, generate a separate package with a map for the site_content table. That map will contain the descriptions of your new columns and indexes.
4. Register the data for your columns and indexes in your package's metadata.mysql.php file (an example of such a file can be seen in our ShopModxBox build).
For example, my file looks like this:
<?php
$custom_fields = array(
    "modResource" => array(
        "fields" => array(
            "article_type" => array(
                "defaultValue" => NULL,
                "metaData" => array(
                    'dbtype' => 'tinyint',
                    'precision' => '3',
                    'attributes' => 'unsigned',
                    'phptype' => 'integer',
                    'null' => true,
                    'index' => 'index',
                ),
            ),
            "image" => array(
                "defaultValue" => NULL,
                "metaData" => array(
                    'dbtype' => 'varchar',
                    'precision' => '512',
                    'phptype' => 'string',
                    'null' => false,
                ),
            ),
        ),

        "indexes" => array(
            'article_type' => array(
                'alias' => 'article_type',
                'primary' => false,
                'unique' => false,
                'type' => 'BTREE',
                'columns' => array(
                    'article_type' => array(
                        'length' => '',
                        'collation' => 'A',
                        'null' => true,
                    ),
                ),
            ),
        ),
    ),
);

foreach($custom_fields as $class => $class_data){
    foreach($class_data['fields'] as $field => $data){
        $this->map[$class]['fields'][$field] = $data['defaultValue'];
        $this->map[$class]['fieldMeta'][$field] = $data['metaData'];
    }

    if(!empty($class_data['indexes'])){
        foreach($class_data['indexes'] as $index => $data){
            $this->map[$class]['indexes'][$index] = $data;
        }
    }
}

Examine it carefully: it adds information about the two columns and one index to the site_content table.

Let's make sure the columns were indeed added. Run this code in the console:

$o = $modx->newObject('modDocument');
print_r($o->toArray());


We see this result:
Array
(
    [id] => 
    [type] => document
    [contentType] => text/html
    [pagetitle] => 
    [longtitle] => 
    // ... many more columns listed here,
    // and at the end, our two new columns
    [article_type] => 
    [image] => 
)
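Notice that article_type and image show up on modDocument even though we registered them in the modResource map. That is the inheritance point from above, and it can be modeled in miniature. The sketch below is NOT the real xPDO API, just a simplified standalone model of how field maps are resolved by walking the ancestor chain; everything in it besides the class names is invented for illustration:

```php
<?php
// Simplified model (not real xPDO): field maps are stored per class name,
// and a class's effective fields are gathered by walking up its ancestors.
$map = [
    'modResource' => ['extends' => null,          'fields' => ['pagetitle' => '']],
    'modDocument' => ['extends' => 'modResource', 'fields' => []],
    'myResource'  => ['extends' => 'modResource', 'fields' => ['my_field' => '']],
];

function effectiveFields(array $map, string $class): array {
    // Collect the inheritance chain, then merge maps base-class-first.
    $chain = [];
    for ($c = $class; $c !== null; $c = $map[$c]['extends']) {
        $chain[] = $c;
    }
    $fields = [];
    foreach (array_reverse($chain) as $c) {
        $fields = array_merge($fields, $map[$c]['fields']);
    }
    return $fields;
}

// A field added only to a subclass does NOT reach its sibling modDocument...
assert(!array_key_exists('my_field', effectiveFields($map, 'modDocument')));

// ...but a field added to the base modResource map reaches every descendant.
$map['modResource']['fields']['article_type'] = '';
assert(array_key_exists('article_type', effectiveFields($map, 'modDocument')));
assert(array_key_exists('article_type', effectiveFields($map, 'myResource')));
```

Writing into $this->map['modResource'] from the extension package's metadata file, as shown earlier, achieves exactly this base-class registration in real xPDO.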


Now we can work with the system table through our custom fields. For example:

$resource = $modx->getObject('modResource', $id);
$resource->article_type = $article_type;
$resource->save();

Our value will be written into this document's row in the table.

Creating your custom columns and indexes on a clean MODX install.


Obviously, with this approach we face a migration problem when moving such a customized site to a clean MODX install, which has none of our custom columns and indexes in its tables. But in fact this is hardly a problem at all. Just as we generated the map from the tables, we can create the columns and indexes in the tables from the map descriptions of the classes. Creating a column or an index is very simple:
// Get the database manager
$manager = $modx->getManager();
// Create the column
$manager->addField($className, $fieldName);
// Create the index
$manager->addIndex($className, $fieldName);

We do not have to specify any column or index data here except their names: xPDO takes that data from our maps and uses it when creating the described column or index.

If you package the component as a normal installation package, you can register a script in it so that your custom table columns and indexes are created when the package is installed.

Rendering your custom data in TV fields when editing documents.


As I said above, the convenience of TVs is that they come with various ready-made controls (text fields, dropdown lists, checkboxes, radio buttons, etc.). In addition, the native editor form lets you restrict permissions on individual TV fields, so that private fields cannot be seen or edited by just anyone. These are exactly the mechanisms I did not want to lose, because otherwise I would have had to build my own interfaces for managing this data, which is very labor-intensive; I wanted to keep editing the data in the native resource editor. I have not found a perfect mechanism here, but I did work out a more or less viable one. The idea is this: in a plugin, at the moment the document edit form is rendered, we populate the TV fields with our custom values, and when the document is saved we intercept the TV data and write it into our custom fields. Unfortunately, there is no way to hook in exactly where we should (the API simply does not allow it), so we cannot alter the data passed to the document processor, and the TV data will still be written to the TV table; but that is not a problem: right after the document is saved we automatically clean it up. Here is an example of a plugin that fires on three occasions (1. rendering the document edit form, substituting our custom data into the TV fields; 2. receiving the data and modifying the document object before it is saved; 3. cleaning up the now-redundant data).
<?php
/*
Events this plugin listens to:
OnBeforeDocFormSave
OnDocFormSave
OnResourceTVFormRender
*/

switch($modx->event->name){

    /*
    Rendering the TVs
    */
    case 'OnResourceTVFormRender':

        $categories = &$scriptProperties['categories'];

        foreach($categories as $c_id => &$category){

            foreach($category['tvs'] as &$tv){

                /*
                Render the tags
                */
                if($tv->id == 1){
                    if($document = $modx->getObject('modResource', $resource)){
                        $q = $modx->newQuery('modResourceTag');
                        $q->select(array(
                            "GROUP_CONCAT(distinct tag_id) as tags",
                        ));
                        $q->where(array(
                            "resource_id" => $document->id,
                        ));
                        $tags = $modx->getValue($q->prepare());
                        $value = str_replace(",", "||", $tags);
                        $tv->value = $value;
                        $tv->relativeValue = $value;
                        $inputForm = $tv->renderInput($document, array('value' => $tv->value));
                        $tv->set('formElement', $inputForm);
                    }
                }

                /*
                Render the image
                */
                else if($tv->id == 2){
                    if($document = $modx->getObject('modResource', $resource)){
                        $tv->value = $document->image;
                        $tv->relativeValue = $document->image;
                        $inputForm = $tv->renderInput($document, array('value' => $tv->value));
                        $tv->set('formElement', $inputForm);
                    }
                }

                /*
                Render the status
                */
                else if($tv->id == 12){
                    if($document = $modx->getObject('modResource', $resource)){
                        $tv->value = $document->article_status;
                        $tv->relativeValue = $document->article_status;
                        $inputForm = $tv->renderInput($document, array('value' => $tv->value));
                        $tv->set('formElement', $inputForm);
                    }
                }
            }
        }

        break;

    // Before the document is saved
    case 'OnBeforeDocFormSave':
        $resource = &$scriptProperties['resource'];
        /*
        Tags.
        Before the document is saved we fetch all of its old
        tags and set them to active = 0.
        All current tags will be set to active = 1.
        After the save, in the OnDocFormSave event, we remove all inactive tags.
        */

        if(isset($resource->tv1)){
            $tags = array();
            foreach((array)$resource->Tags as $tag){
                $tag->active = 0;
                $tags[$tag->tag_id] = $tag;
            }

            if(!empty($resource->tv1)){
                foreach((array)$resource->tv1 as $tv_value){
                    if($tv_value){
                        if(!empty($tags[$tv_value])){
                            $tags[$tv_value]->active = 1;
                        }
                        else{
                            $tags[$tv_value] = $modx->newObject('modResourceTag', array(
                                "tag_id" => $tv_value,
                            ));
                        }
                    }
                }
            }

            $resource->Tags = $tags;

            $tags_ids = array();
            foreach($resource->Tags as $tag){
                if($tag->active){
                    $tags_ids[] = $tag->tag_id;
                }
            }

            $resource->tags = ($tags_ids ? implode(",", $tags_ids) : NULL);
        }

        /*
        Process the image
        */
        if(isset($resource->tv2)){
            $resource->image = $resource->tv2;
        }

        /*
        Process the status
        */
        if(isset($resource->tv12)){
            $resource->article_status = $resource->tv12;
        }

        break;

    /*
    After the document is saved
    */
    case 'OnDocFormSave':
        $resource = &$scriptProperties['resource'];
        /*
        Remove all inactive tags
        */
        $modx->removeCollection('modResourceTag', array(
            'active' => 0,
            'resource_id' => $resource->id,
        ));

        /*
        Remove the tag, image and status TVs,
        since their values are now stored in the system table
        */
        $modx->removeCollection('modTemplateVarResource', array(
            'tmplvarid:in' => array(
                1,  // Tags
                2,  // Image
                12, // Status
            ),
            'contentid' => $resource->id,
        ));

        break;
}


Thanks to this plugin, the custom data is rendered in the document edit form and processed when the document is saved.


Result


Of the 225,000+ records in the additional-fields table, only 78 remain. Of course, not all TVs can be moved into the system table (only those used for searching and sorting), and some data will certainly stay in the per-field table, but the load has still dropped significantly and the queries have become simpler.

UPD: a more universal plugin for rendering and processing the TVs.
Article based on information from habrahabr.ru
