Import/Export Entity Data

The OroImportExportBundle is intended to import entities into or export them out of OroPlatform. The bundle uses the OroBatchBundle to organize the execution of import/export operations. Any import/export operation is a job.

A job itself is abstract. It doesn’t know any specific details of what is happening during its execution. A job consists of steps which can be configured to run in an execution context and are executed by the client.

Each step aggregates three crucial components which are not aware of each other:

  • Reader
  • Processor
  • Writer

A step uses the reader to read data from the source. After the reader has run, the data is passed to the processor which can modify the data before it is forwarded to the writer. Finally, the writer saves data to its final destination.
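
Conceptually, the three components follow a very small contract. The following PHP sketch uses hypothetical interface and method names (not the actual bundle interfaces) purely to illustrate how a step wires them together:

<?php
// Hypothetical, simplified illustration of a step's read -> process -> write cycle.
// The real contracts live in the ImportExportBundle and the Akeneo BatchBundle.

interface SimpleReader    { public function read(); }              // returns the next item, or null when the source is empty
interface SimpleProcessor { public function process($item); }      // transforms or enriches a single item
interface SimpleWriter    { public function write(array $items); } // persists a batch of processed items

function runStep(SimpleReader $reader, SimpleProcessor $processor, SimpleWriter $writer)
{
    $items = [];
    while (null !== ($item = $reader->read())) {   // read until the source is exhausted
        $items[] = $processor->process($item);     // the processor may modify the item
    }
    $writer->write($items);                        // write all processed items to the destination
}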

See also

You can take a look at the code in the OroCRM ContactBundle for a real world example. It extends base classes from the ImportExportBundle (see the classes in the ImportExport namespace) to implement contact specific behavior. The configuration is located in the Resources/config/importexport.yml file.

Import/Export Configuration

Import is a basic operation available for any entity. An import operation consists of a single step. See the following example configuration:

# Oro/Bundle/ImportExportBundle/Resources/config/batch_jobs.yml
connector:
    name: oro_importexport
    jobs:
        entity_import_from_csv:
            title: "Entity Import from CSV"
            type: import
            steps:
                import:
                    title:     import
                    class:     Oro\Bundle\BatchBundle\Step\ItemStep
                    services:
                        reader:    oro_importexport.reader.csv
                        processor: oro_importexport.processor.import_delegate
                        writer:    oro_importexport.writer.entity
                    parameters: ~

The import algorithm being performed is (in pseudocode):

Process job:
  - Process step 1:
    - loop
      - read item from source
      - if source is empty exit from loop
      - process item
      - save processed item to array of entities
    - end loop
    - save array of prepared entities to DB

The OroBatchBundle provides the ItemStep class that executes each step of a job. In its doExecute() method, ItemStep creates a StepExecutor instance, passes the reader, the processor and the writer to it, and runs the step through the StepExecutor's execute() method. After this step is done, all imported items have been written to the destination.

The Import Process in Detail

For example, here is what happens in detail when you import contact data from a CSV file:

  1. The CsvFileReader reads one row from the CSV file in its read() method and transforms it to an array representing the columns of that row.
  2. The data being read is then passed to the process() method of the ImportProcessor class which converts the item to a complex array using the convertToImportFormat() method of the ConfigurableTableDataConverter data converter class.
  3. The processor deserializes the item from the converted array using the Serializer class.
  4. Optionally, the deserialized object can then be modified by the ConfigurableAddOrReplaceStrategy class.
  5. Finally, the processed entity is returned by the processor and then passed to the EntityWriter class. This writer stores the data when its write() method is executed.
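
Putting these pieces together, the processor's job during import can be pictured roughly as in the following sketch. This is hypothetical, simplified code that mirrors the steps above; it is not the actual ImportProcessor implementation:

<?php
// Simplified sketch of the import flow (for illustration only).

class SketchImportProcessor
{
    private $dataConverter; // e.g. the ConfigurableTableDataConverter
    private $serializer;    // the bundle's Serializer
    private $strategy;      // e.g. the ConfigurableAddOrReplaceStrategy (optional)
    private $entityName;    // class name of the imported entity

    public function process($item)
    {
        // 1. Convert the flat CSV row into a nested array matching the entity structure.
        $converted = $this->dataConverter->convertToImportFormat($item);

        // 2. Denormalize the nested array into an entity object.
        $entity = $this->serializer->deserialize($converted, $this->entityName, null);

        // 3. Let the strategy decide whether to add a new entity or update an existing one.
        return $this->strategy ? $this->strategy->process($entity) : $entity;
    }
}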

The Export Process in Detail

The export process is essentially the import process in reverse, except that it doesn’t use a strategy:

  1. First, the EntityReader class reads an object.
  2. Then, the ExportProcessor class serializes and converts the object into an associative array with property names as keys and the property values as values of the array.
  3. The Serializer class normalizes each field and converts objects to complex arrays.
  4. A data converter converts the associative array into a flat, one-dimensional array whose keys correspond to the export columns.
  5. Finally, all array entries are written to a CSV file by the CsvFileWriter class.
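
Again, a simplified, hypothetical sketch of this flow (not the actual ExportProcessor implementation):

<?php
// Simplified sketch of the export flow (for illustration only).

class SketchExportProcessor
{
    private $serializer;    // the bundle's Serializer
    private $dataConverter; // flattens nested arrays into column-oriented arrays

    public function process($entity)
    {
        // 1. Normalize the entity (and its relations) into a nested associative array.
        $data = $this->serializer->normalize($entity, null);

        // 2. Flatten the nested array into "column header" => "value" pairs for the CSV writer.
        return $this->dataConverter->convertToExportFormat($data);
    }
}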

The export algorithm being performed is (in pseudocode):

Process job:
  - Process step 1:
    - loop
      - read entity from DB
      - if source is empty exit from loop
      - process entity
      - append plain array to array of items to write
    - end loop
    - save array of prepared items to the output file

Serializer & Normalizer

An important concept is how relations between entities and other complex data are normalized and denormalized.

The Serializer class extends the standard serializer of the Symfony Serializer component and has its own normalizers and denormalizers. Each entity that you want to export/import should be supported by the serializer. This means that you should add normalizers and denormalizers that will take care of converting your entity to the array/scalar representation (normalization during serialization) and vice versa, converting arrays to the entity object representation (denormalization during deserialization).
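
For example, a normalizer for a hypothetical Acme\DemoBundle\Entity\Hobby entity could look like the following sketch. It is based on the standard Symfony Serializer normalizer interfaces; the bundle's own interfaces add an import/export context but follow the same idea:

<?php

namespace Acme\DemoBundle\ImportExport\Serializer\Normalizer;

use Acme\DemoBundle\Entity\Hobby;
use Symfony\Component\Serializer\Normalizer\DenormalizerInterface;
use Symfony\Component\Serializer\Normalizer\NormalizerInterface;

// Hypothetical example: converts a Hobby entity to a scalar value and back.
class HobbyNormalizer implements NormalizerInterface, DenormalizerInterface
{
    public function normalize($object, $format = null, array $context = [])
    {
        // Entity -> scalar representation written to the export file.
        return $object->getName();
    }

    public function denormalize($data, $class, $format = null, array $context = [])
    {
        // Scalar value read from the import file -> entity object.
        $hobby = new Hobby();
        $hobby->setName((string) $data);

        return $hobby;
    }

    public function supportsNormalization($data, $format = null)
    {
        return $data instanceof Hobby;
    }

    public function supportsDenormalization($data, $type, $format = null)
    {
        return is_a($type, Hobby::class, true);
    }
}

The normalizer then has to be registered as a service and tagged so that the import/export serializer can find it; refer to the ImportExportBundle documentation for the exact tag name.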

The ConfigurableEntityNormalizer

The platform converts entities to complex arrays using the normalize() method of the ConfigurableEntityNormalizer class. This method uses the field helper to process each field:

  • If the field is excluded by the configuration, it will be skipped during normalization.
  • If the field is an object, another entity or a collection, the normalize() method for this type of object will be called.
  • If the field is a scalar value, the field will be added with this value to the array of normalized values.
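
In simplified form, the per-field logic looks roughly like the following snippet (hypothetical helper method names, not the actual implementation):

// Illustration of the per-field normalization decision (hypothetical helper names).
$result = [];
foreach ($fieldHelper->getFields($entityClass) as $field) {
    if ($fieldHelper->isExcluded($field)) {
        continue;                                // excluded fields are skipped
    }
    $value = $fieldHelper->getValue($entity, $field);
    $result[$field] = is_object($value)
        ? $serializer->normalize($value)         // nested entity or collection
        : $value;                                // plain scalar value
}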

You can configure your fields in the UI under System / Entities / Entity Management. Alternatively, you can describe the field configuration in your entity directly using annotations:

 /**
  * @ConfigField(
  *      defaultValues={
  *          "importexport"={
  *              "order"=200,
  *              "full"=true
  *          }
  *      }
  * )
  */

You can use the following options:

Option      Description
identity    If true, the field is part of the key used to identify an instance of the entity. Configuring the identity fields is required to support imports.
order       The position of the property in the export.
excluded    If true, the field is skipped during export.
full        If false, the normalize() method returns only the identity fields of associated entities during export. If true, all fields of the related entity are exported (fields with the excluded option are still skipped). This option cannot be configured in the user interface; it can only be set using annotations.
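
For example, a field that is part of the entity identity and a field that should be skipped during export could be configured as follows (illustrative field names):

 /**
  * @ConfigField(
  *      defaultValues={
  *          "importexport"={
  *              "identity"=true,
  *              "order"=10
  *          }
  *      }
  * )
  */
 protected $email;

 /**
  * @ConfigField(
  *      defaultValues={
  *          "importexport"={
  *              "excluded"=true
  *          }
  *      }
  * )
  */
 protected $internalNotes;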

Importing one-to-many Relations

If you want to import one-to-many relations from a CSV file, you should use the following field name rules for the header columns: “RelationFieldName NumberOfInstance FieldName” where these strings have the following meaning:

  • RelationFieldName (string): entity relation name;
  • NumberOfInstance (integer): the sequential number of the related entity instance, for example 1;
  • FieldName (string): the name of the field in the related entity.

For example:

"Addresses 1 First name"

FieldName may be a field label or a column name taken from the field configuration. You can look it up in the UI under System / Entities / Entity Management. You should import all identity fields of the related entity.
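
For example, a CSV file that imports contacts together with two addresses could start with a header row like the following (the exact labels depend on your field configuration, so treat these as illustrative):

"First name","Last name","Addresses 1 Street","Addresses 1 City","Addresses 1 Zip/postal code","Addresses 2 Street","Addresses 2 City","Addresses 2 Zip/postal code"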

Importing many-to-one Relations

If you want to import many-to-one relations, you should use the following rule: “RelationFieldName IdentityFieldName” where these placeholders have the following meaning:

  • RelationFieldName (string): entity relation name;
  • IdentityFieldName (string): identity field of the related entity. If the related entity has two or more identity fields, you should import all identity fields of the related entity.

For example:

"Owner Username"

Extending Contact Import/Export

Adding a new Provider to Support different Formats

To write your own provider for import operations, you should create a class that extends the AbstractReader class. To support custom export formats, you just need to create a new class that implements the ItemWriterInterface from the Akeneo BatchBundle. The new classes must be declared as services:

parameters:
    oro_importexport.reader.csv.class: Acme\DemoBundle\ImportExport\Reader\ExcelFileReader
    oro_importexport.writer.csv.class: Acme\DemoBundle\ImportExport\Writer\ExcelFileWriter

services:
    oro_importexport.reader.csv:
        class: "%oro_importexport.reader.csv.class%"

    oro_importexport.writer.csv:
        class: "%oro_importexport.writer.csv.class%"

Changing the Strategy

OroPlatform provides a basic “add or substitute” import strategy. The basic process is implemented in the ConfigurableAddOrReplaceStrategy class. To create your own import strategy, you can extend this class and override the methods you need, as in the sketch below.
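
A minimal sketch of such a strategy might look as follows. It assumes that afterProcessEntity() is one of the overridable hooks; check the base class in your OroPlatform version for the exact method names:

<?php

namespace Acme\DemoBundle\ImportExport\Strategy;

use Oro\Bundle\ImportExportBundle\Strategy\Import\ConfigurableAddOrReplaceStrategy;

class CustomAddOrReplaceStrategy extends ConfigurableAddOrReplaceStrategy
{
    protected function afterProcessEntity($entity)
    {
        // Apply entity-specific adjustments here, e.g. set a default owner
        // or recalculate derived fields, before the entity is written.

        return parent::afterProcessEntity($entity);
    }
}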

See also

You can see an example of an adapted strategy in the ContactAddOrReplaceStrategy from the OroCRM ContactBundle.

Learn more

Read the ImportExportBundle documentation for more information.
