Job Configuration - Overview
This page provides an overview of import and export jobs in DJUST via SFTP or API Connector.
✅ Introduction
Data Synchronization
DJUST supports two types of data synchronization jobs:
- Import Jobs – Bring external data into DJUST.
- Export Jobs – Send data from DJUST to external systems.
Jobs can be configured to work with either:
- SFTP
- API Connector
REST APIs: The API Connector works only with REST architecture APIs and does not work with SOAP or GraphQL APIs.
Hybrid method available for API Connector: You can configure an API Connector to retrieve a .csv file by making a GET request to a URL returned by a previous API call.
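The hybrid flow can be sketched as two chained calls: the first returns a JSON payload containing the file URL, the second downloads the CSV. This is an illustrative model only; the endpoint URL and the `fileUrl` field name are assumptions, not part of the DJUST API.

```python
import csv
import io
import json

def fetch_hybrid_csv(fetch):
    """Two-step retrieval: a first API call returns JSON containing the
    URL of a .csv file, which a second GET then downloads.
    `fetch` is any callable(url) -> response body as a string."""
    # Step 1: initial API call; "fileUrl" is an illustrative field name.
    first = json.loads(fetch("https://api.example.com/v1/export-link"))
    # Step 2: GET the returned URL to retrieve the CSV content.
    body = fetch(first["fileUrl"])
    return list(csv.reader(io.StringIO(body)))

# Stubbed fetcher standing in for real HTTP calls:
def fake_fetch(url):
    if url.endswith("/export-link"):
        return json.dumps({"fileUrl": "https://files.example.com/offers.csv"})
    return "sku,price\nA-1,9.99\n"

rows = fetch_hybrid_csv(fake_fetch)
print(rows)  # [['sku', 'price'], ['A-1', '9.99']]
```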
Execution Modes
Jobs can be triggered in several ways:
- Manually: Triggered by an admin user from the DJUST back office (Data Hub section).
- Scheduled: Automatically run at regular intervals using cron-based scheduling (e.g., every 30 min, daily). All scheduled times are expressed in UTC.
- Event-driven (for exports): Triggered when an event occurs — typically a status change on an object like an Order.
Simultaneous execution of a job
Simultaneous execution of a job (running multiple import processes at the same time for a given job) is not allowed. This prevents data conflicts that can occur when executions modify the same records or rely on each other.
When a new execution is triggered while another is still running, the system behavior depends on the job configuration:
- RUN_IMMEDIATELY: the new execution is queued with the status JOB_PENDING and will start as soon as the previous one completes.
- WAIT_FOR_NEXT: the new execution is skipped with the status JOB_SKIPPED, and only the next scheduled execution will be considered.
This setting can be configured individually for each import job.
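The two policies can be modeled as a simple decision function. The status names JOB_PENDING and JOB_SKIPPED come from the documentation above; everything else is an illustrative sketch, not the platform implementation.

```python
def resolve_new_execution(policy, previous_still_running):
    """Decide what happens to a newly triggered execution under each
    concurrency policy."""
    if not previous_still_running:
        return "JOB_STARTED"
    if policy == "RUN_IMMEDIATELY":
        # Queued; starts as soon as the running execution completes.
        return "JOB_PENDING"
    if policy == "WAIT_FOR_NEXT":
        # Dropped; only the next scheduled execution is considered.
        return "JOB_SKIPPED"
    raise ValueError(f"unknown policy: {policy}")

print(resolve_new_execution("RUN_IMMEDIATELY", True))  # JOB_PENDING
print(resolve_new_execution("WAIT_FOR_NEXT", True))    # JOB_SKIPPED
```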
Duplicating a Job
You can create a copy of an existing job to quickly set up a similar import or export with the same configuration. This is useful when you need multiple jobs with similar mappings but different source files or schedules.
What is copied:
- All mapping configuration and settings from the source job.
What is not copied:
- The scheduler (cron) configuration — the new job's scheduler is reset and disabled.
- The new job is created in an inactive state.
Naming:
- The duplicated job is named {Source job name} - COPY. If that name already exists, a numeric suffix is added (e.g., - COPY (1), - COPY (2)).
API endpoint:
- POST /v1/mapper/jobs/{jobId}/duplicate — operationId: ADM-JOB-100
- Access: dj-client: OPERATOR
- Response: 200 OK with the new job's id.
{
  "id": "a9b2c3d4-e5f6-7890-abcd-ef1234567890"
}
Stopping a Job Execution
If a job execution becomes stuck or needs to be interrupted, operators can manually stop it directly from the Back-Office.
A "Stop execution" button is available on the job detail page for each execution currently in progress. The button is visible only when the execution is in one of the following stoppable statuses:
- JOB_INITIALIZING
- JOB_STARTED
- INTEGRATION_WAITING
- INTEGRATION_STARTED
- JOB_PENDING
Warning: Stopping an execution is irreversible. A confirmation dialog is displayed before the action is performed.
Once the execution is stopped, the execution list refreshes automatically. Completed or failed executions do not display the stop button.
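The stoppable statuses above can be captured as a reusable guard that a client checks before calling the stop endpoint. The terminal status name in the example is illustrative, not from the DJUST API.

```python
# Statuses in which an execution can still be stopped (from the docs).
STOPPABLE_STATUSES = {
    "JOB_INITIALIZING",
    "JOB_STARTED",
    "INTEGRATION_WAITING",
    "INTEGRATION_STARTED",
    "JOB_PENDING",
}

def can_stop(status):
    """Return True if an execution in this status may be stopped."""
    return status in STOPPABLE_STATUSES

print(can_stop("JOB_PENDING"))    # True
print(can_stop("JOB_COMPLETED"))  # False (illustrative terminal status)
```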
Requirements:
- Permission: MAPPER_WRITE
API endpoints:
- stopJobExecutionByJobStatusId — POST /v1/mapper/status/{jobStatusId}/stop — stops a specific execution by its job status ID.
- stopLastJobExecution — stops the last execution of a given job.
Execution ID
Each job execution (import and export) is assigned a unique execution ID. This identifier is displayed in the Back-Office:
- In the execution history list, via a dedicated column.
- In the execution detail view, below the execution date.
A Copy button allows you to copy the ID in one click to share it with DJUST support when reporting an incident.
Tip: When reporting an issue on a specific job execution, always include the execution ID. This eliminates ambiguity when multiple executions occur on the same day or at similar times.
The execution ID corresponds to the jobStatusId field (for imports) or exportIntegrationId field (for exports) returned by the API. If a job has never been executed, no ID is displayed.
Execution History — Search and Filter
The execution history for import and indexation jobs can be retrieved via API with search and filter capabilities:
- Search by ID: use the search query parameter to perform a partial match (contains) on the executionId field.
- Filter by status: use the statuses query parameter (multi-value) to filter executions by status. Unknown status values are silently ignored.
- Default sort: createdAt:desc (most recent first).
- Pagination: standard DJUST paginated response.
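Building such a request can be sketched with the standard library. The search and statuses parameter names come from the documentation; the page/size pagination parameter names are an assumption based on typical paginated APIs.

```python
from urllib.parse import urlencode

def build_executions_url(job_id, search=None, statuses=(), page=0, size=20):
    """Assemble the query string for GET /v2/jobs/{jobId}/executions."""
    params = [("page", page), ("size", size)]
    if search:
        params.append(("search", search))   # partial match on executionId
    for status in statuses:                 # multi-value status filter
        params.append(("statuses", status))
    return f"/v2/jobs/{job_id}/executions?{urlencode(params)}"

url = build_executions_url("job-42", search="a9b2",
                           statuses=["JOB_STARTED", "JOB_PENDING"])
print(url)
# /v2/jobs/job-42/executions?page=0&size=20&search=a9b2&statuses=JOB_STARTED&statuses=JOB_PENDING
```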
Import / Indexation jobs:
- Endpoint: GET /v2/jobs/{jobId}/executions — operationId: ADM-JOB-550
- Access: dj-client: OPERATOR
- Response: 200 OK
Export jobs:
- Endpoint:
GET /v1/mapper/jobout/{jobOutId}/exported-item-statuses— operationId:ADM-JOBOUT-550 - Access:
dj-client: OPERATOR - Parameters:
search(partial match onexportIntegrationId),statuses(values:SUCCESS,ERROR) - Response:
200 OK
Deprecation: The legacy endpoint GET /v1/mapper/status/all/{jobId} remains functional but is deprecated. Migrate to GET /v2/jobs/{jobId}/executions for import/indexation jobs.
🧩 Entity-Based Configuration
Jobs are entity-specific, meaning each job is designed to handle a single type of data within DJUST.
The availability of each entity depends on the chosen method (SFTP or API Connector), as not all entities are supported by both.
Importable Entities per Method
| Entity | Job objectives | SFTP | API Connector |
|---|---|---|---|
| ACCOUNT | Create or update your Customer Accounts | ✅ | ✅ |
| ASSORTMENT | Create or update your Assortments | ✅ | ✅ |
| ATTRIBUTE | Create or update your Attributes | ✅ | ❌ |
| CATALOG_VIEW | Create or update your Catalog Views | ❌ | ✅ |
| CLASSIFICATION_CATEGORY | Create or update your Classifications | ✅ | ❌ |
| CUSTOMER_USER | Create or update your Customer Users | ✅ | ❌ |
| INCIDENT | Update your Incident status | ❌ | ✅ |
| NAVIGATION_CATEGORY | Create or update your Navigations | ✅ | ❌ |
| OFFER | Create or update your Offers (inventory and price) | ✅ | ✅ |
| ORDER | Create or update External Orders; update your Internal Orders | ✅ | ✅ |
| ORDER_UPDATE | Update your Internal Orders | ❌ | ✅ |
| ORDER_STATUS | Update your Order Status | ✅ (XML only) | ✅ |
| PRODUCT | Create or update your Products | ✅ | ✅ |
| PRODUCT_TAG | Create or update your Product Tags | ✅ | ❌ |
| RELATED_PRODUCT | Create or update your Related Products | ✅ | ❌ |
| STORE | Create or update your Stores | ✅ | ❌ |
| SUPPLIER | Create or update your Suppliers | ✅ | ✅ |
Exportable Entities per Method
| Entity | Job objectives | SFTP | API Connector |
|---|---|---|---|
| ORDER | Export your Orders information | ✅ | ✅ |
| INCIDENT | Export your Incidents information | ❌ | ✅ |
Automatic Export Settings
The general settings exportOrderEnabled and exportIncidentEnabled (returned by GET /v1/settings/general) are automatically managed by the platform based on export job lifecycle:
- When an export job is created (ORDER or INCIDENT), the corresponding setting is automatically activated.
- When the last export job of a given type is deleted, the corresponding setting is automatically deactivated.
These two fields are read-only via the API: they are returned by GET /v1/settings/general but silently ignored when sent in a PUT /v1/settings/general request. No action is required from integrators — existing API calls continue to work as before.
Tip: You do not need to manually manage these settings. Simply create or delete export jobs, and the platform keeps the settings in sync automatically.
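The synchronization rule can be modeled as deriving both settings from the set of existing export jobs. This is an illustrative model of the behavior described above, not the platform's actual code.

```python
# Mapping from export job entity type to its read-only general setting.
SETTING_FOR_TYPE = {
    "ORDER": "exportOrderEnabled",
    "INCIDENT": "exportIncidentEnabled",
}

def recompute_settings(export_job_types):
    """Derive the two read-only settings from the entity types of the
    export jobs that currently exist: a setting is active iff at least
    one export job of that type remains."""
    return {setting: (etype in export_job_types)
            for etype, setting in SETTING_FOR_TYPE.items()}

print(recompute_settings(["ORDER"]))
# {'exportOrderEnabled': True, 'exportIncidentEnabled': False}
print(recompute_settings([]))
# {'exportOrderEnabled': False, 'exportIncidentEnabled': False}
```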
🏪 Multi-Store Behavior
- By default, import jobs are cross-store.
- The only exception is the NAVIGATION import via SFTP, which must always be scoped to a store within the Navigation Job Configuration.
Entity Dependencies
Some entities depend on others being imported first. While the overall import sequence can be adjusted, respecting these dependencies ensures data consistency and prevents import errors.
Here is a recommended import order based on data prerequisites:
| Entity | Prerequisites | Notes |
|---|---|---|
| Attribute | — | Must be created before using in classifications or products. |
| Supplier | — | Required before Offers. |
| Account | — | Represents a B2B customer entity. |
| Store | — | Represents a store on the merchant website. |
| Classification | Attribute | Needed before importing Navigation or Products. |
| Navigation | Classification | Relies on existing classification structure. |
| Product | Classification | Must be imported before Offers, Tags, and Assortments. |
| Offer | Product, Supplier | Links a product to a supplier with commercial terms. |
| Product Tag | Product | Cannot be imported before products exist. |
| Related Product | Product | Cannot be imported before products exist. |
| Assortment | Product | Assortments group products and require products to be available. |
| Customer User | Account | A user is always linked to an existing account. |
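A valid import order can be derived mechanically from the table above with a topological sort. A stdlib sketch (Python 3.9+), where each entity maps to the entities that must be imported before it:

```python
from graphlib import TopologicalSorter

# Dependency graph from the table: entity -> prerequisite entities.
DEPENDENCIES = {
    "Attribute": set(), "Supplier": set(), "Account": set(), "Store": set(),
    "Classification": {"Attribute"},
    "Navigation": {"Classification"},
    "Product": {"Classification"},
    "Offer": {"Product", "Supplier"},
    "Product Tag": {"Product"},
    "Related Product": {"Product"},
    "Assortment": {"Product"},
    "Customer User": {"Account"},
}

# static_order() yields each entity only after all of its prerequisites.
order = list(TopologicalSorter(DEPENDENCIES).static_order())
print(order)
```

Any order produced this way respects the prerequisites, e.g. Classification always precedes Product, which precedes Offer.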
Data Trimming
During import (both CSV/SFTP and API Connector), leading and trailing whitespace characters are automatically trimmed from all field values. Internal spaces within a value are preserved.
| Input value | After trimming |
|---|---|
| " my value " | "my value" |
| " leading space" | "leading space" |
| "trailing space " | "trailing space" |
| "value with spaces" | "value with spaces" |
This applies to both import and export mapping fields.
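The trimming rule is equivalent to Python's str.strip(): leading and trailing whitespace is removed, internal spaces are preserved.

```python
values = ["  my value  ", "  leading space", "trailing space  ",
          "value with spaces"]

# str.strip() removes leading/trailing whitespace only.
trimmed = [v.strip() for v in values]
print(trimmed)
# ['my value', 'leading space', 'trailing space', 'value with spaces']
```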
Best Practices
✔️ Only run jobs for data that has actually changed. Avoid full re-imports unless necessary, especially for large catalogs.
✔️ Importing entities in the correct order helps avoid failures due to missing references.
