[MODORDSTOR-435] Introduce new batch update pieces endpoint #463

Merged
8 commits merged on Dec 23, 2024
23 changes: 23 additions & 0 deletions README.md
@@ -68,6 +68,29 @@ In order to support search and filtering for the APIs /orders-storage/purchase-o

## Additional information

### Kafka domain event pattern
The pattern means that every time a domain entity is created, updated, or deleted, a message is posted to a Kafka topic.
Currently, domain events are supported for orders, order lines, and pieces. The events are published to the following topics:

- `ACQ_ORDER_CHANGED` - for orders
- `ACQ_ORDER_LINE_CHANGED` - for order lines
- `ACQ_PIECE_CHANGED` - for pieces

The event payload has the following structure:
```json5
{
  "id": "12bb13f6-d0fa-41b5-b0ad-d6561975121b",
  "action": "CREATED|UPDATED|DELETED",
  "userId": "1d4f3f6-d0fa-41b5-b0ad-d6561975121b",
  "eventDate": "2024-11-14T10:00:00.000+0000",
  "actionDate": "2024-11-14T10:00:00.000+0000",
  "entitySnapshot": { } // the affected entity: an order, orderLine, or piece
}
```

The default number of partitions for each topic is 1.
The Kafka partition key for every event is the entity id, so all events for the same entity are routed to the same partition and keep their order.
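
As a rough illustration of the production side, the sketch below publishes one such message with the plain `kafka-clients` producer, keyed by the entity id as described above. The broker address, serializer configuration, payload values, and the assumption that the topic name is used verbatim are all placeholders for the example; this is not the module's actual producer code.

```java
import java.util.Properties;
import java.util.UUID;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PieceEventProducerSketch {

  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
    props.put("key.serializer", StringSerializer.class.getName());
    props.put("value.serializer", StringSerializer.class.getName());

    String pieceId = "12bb13f6-d0fa-41b5-b0ad-d6561975121b"; // illustrative entity id
    // The payload mirrors the structure documented above; values are illustrative.
    String payload = """
        {
          "id": "%s",
          "action": "UPDATED",
          "userId": "1d4f36f6-d0fa-41b5-b0ad-d6561975121b",
          "eventDate": "2024-11-14T10:00:00.000+0000",
          "actionDate": "2024-11-14T10:00:00.000+0000",
          "entitySnapshot": { }
        }""".formatted(UUID.randomUUID());

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
      // The record key is the entity id, so every event for the same piece
      // lands in the same partition.
      producer.send(new ProducerRecord<>("ACQ_PIECE_CHANGED", pieceId, payload));
    }
  }
}
```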

### Issue tracker

See project [MODORDSTOR](https://issues.folio.org/browse/MODORDSTOR)
13 changes: 12 additions & 1 deletion descriptors/ModuleDescriptor-template.json
@@ -61,6 +61,11 @@
"methods": ["DELETE"],
"pathPattern": "/orders-storage/pieces/{id}",
"permissionsRequired": ["orders-storage.pieces.item.delete"]
},
{
"methods": ["PUT"],
"pathPattern": "/orders-storage/pieces-batch",
"permissionsRequired": ["orders-storage.pieces-batch.collection.put"]
}
]
},
@@ -780,6 +785,11 @@
"displayName" : "orders-storage.pieces-item delete",
"description" : "Delete a piece"
},
{
"permissionName" : "orders-storage.pieces-batch.collection.put",
"displayName" : "Pieces batch update",
"description" : "Update a set of Pieces in a batch"
},
{
"permissionName" : "orders-storage.pieces.all",
"displayName" : "All orders-storage pieces perms",
@@ -789,7 +799,8 @@
"orders-storage.pieces.item.post",
"orders-storage.pieces.item.get",
"orders-storage.pieces.item.put",
"orders-storage.pieces.item.delete"
"orders-storage.pieces.item.delete",
"orders-storage.pieces-batch.collection.put"
]
},
{
2 changes: 1 addition & 1 deletion ramls/acq-models
Submodule acq-models updated 34 files
+17 −0 mod-finance/examples/fund_update_log.sample
+22 −0 mod-finance/examples/fund_update_log_collection.sample
+53 −0 mod-finance/examples/fy_finance_data_collection_get.sample
+63 −0 mod-finance/examples/fy_finance_data_collection_put.sample
+49 −0 mod-finance/schemas/fund_update_log.json
+24 −0 mod-finance/schemas/fund_update_log_collection.json
+126 −0 mod-finance/schemas/fy_finance_data.json
+25 −0 mod-finance/schemas/fy_finance_data_collection.json
+1 −1 mod-invoice-storage/examples/document.sample
+83 −0 mod-invoice-storage/examples/invoice_audit_event.sample
+92 −0 mod-invoice-storage/examples/invoice_line_audit_event.sample
+80 −0 mod-invoice-storage/examples/outbox_event_log.sample
+9 −0 mod-invoice-storage/schemas/event_action.json
+9 −0 mod-invoice-storage/schemas/event_topic.json
+1 −0 mod-invoice-storage/schemas/invoice.json
+39 −0 mod-invoice-storage/schemas/invoice_audit_event.json
+1 −0 mod-invoice-storage/schemas/invoice_line.json
+43 −0 mod-invoice-storage/schemas/invoice_line_audit_event.json
+34 −0 mod-invoice-storage/schemas/outbox_event_log.json
+1 −10 mod-orders-storage/schemas/piece.json
+15 −0 mod-orders-storage/schemas/receiving_status.json
+9 −0 mod-orders/examples/claimingCollection.sample
+16 −0 mod-orders/examples/claimingResults.sample
+8 −0 mod-orders/examples/pieceStatusBatchCollection.sample
+34 −0 mod-orders/schemas/claimingCollection.json
+44 −0 mod-orders/schemas/claimingResults.json
+29 −0 mod-orders/schemas/pieceStatusBatchCollection.json
+200 −0 mod-orgs/examples/organization_audit_event.sample
+197 −0 mod-orgs/examples/outbox_event_log.sample
+9 −0 mod-orgs/schemas/event_action.json
+8 −0 mod-orgs/schemas/event_topic.json
+1 −0 mod-orgs/schemas/organization.json
+39 −0 mod-orgs/schemas/organization_audit_event.json
+33 −0 mod-orgs/schemas/outbox_event_log.json
45 changes: 45 additions & 0 deletions ramls/pieces-batch.raml
@@ -0,0 +1,45 @@
#%RAML 1.0
title: "mod-orders"
baseUri: http://github.com/folio-org/mod-orders-storage
version: v1.0

documentation:
  - title: Pieces batch API
    content: This module implements the Pieces batch processing interface. This API is intended for internal use only.

types:
  pieces-collection: !include acq-models/mod-orders-storage/schemas/piece_collection.json
  UUID:
    type: string
    pattern: ^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[1-5][0-9a-fA-F]{3}-[89abAB][0-9a-fA-F]{3}-[0-9a-fA-F]{12}$

resourceTypes:
  collection: !include raml-util/rtypes/collection.raml
  collection-item: !include raml-util/rtypes/item-collection.raml
traits:

/orders-storage/pieces-batch:
  displayName: Process list of Pieces in a batch
  description: Process list of Pieces in a batch APIs
  put:
    description: "Update the list of Pieces in a batch"
    body:
      application/json:
        type: pieces-collection
        example:
          strict: false
          value: !include acq-models/mod-orders-storage/examples/piece_collection.sample
    responses:
      204:
        description: "Collection successfully updated"
      400:
        description: "Bad request, e.g. malformed request body or query parameter. Details of the error (e.g. name of the parameter or line/character number with malformed data) provided in the response."
        body:
          text/plain:
            example: "unable to update <<resourcePathName|!singularize>> -- malformed JSON at 13:4"
      500:
        description: "Internal server error, e.g. due to misconfiguration"
        body:
          text/plain:
            example: "internal server error, contact administrator"
4 changes: 2 additions & 2 deletions src/main/java/org/folio/dao/PieceClaimingRepository.java
@@ -11,8 +11,8 @@
import java.time.ZoneId;

import static org.folio.dao.audit.AuditOutboxEventsLogRepository.OUTBOX_TABLE_NAME;
import static org.folio.rest.impl.PiecesAPI.PIECES_TABLE;
import static org.folio.rest.impl.TitlesAPI.TITLES_TABLE;
import static org.folio.models.TableNames.PIECES_TABLE;
import static org.folio.models.TableNames.TITLES_TABLE;
import static org.folio.rest.persist.PostgresClient.convertToPsqlStandard;

public class PieceClaimingRepository {
@@ -77,7 +77,7 @@ private Future<Void> processPoLinesUpdate(String holdingId, String permanentLoca
}

private Future<Void> processPiecesUpdate(String holdingId, String tenantIdFromEvent,
String centralTenantId, Map<String, String> headers,Conn conn) {
String centralTenantId, Map<String, String> headers, Conn conn) {
return pieceService.getPiecesByHoldingId(holdingId, conn)
.compose(pieces -> updatePieces(pieces, holdingId, tenantIdFromEvent, centralTenantId, conn))
.compose(pieces -> auditOutboxService.savePiecesOutboxLog(conn, pieces, PieceAuditEvent.Action.EDIT, headers))
65 changes: 32 additions & 33 deletions src/main/java/org/folio/rest/impl/PiecesAPI.java
@@ -1,47 +1,47 @@
package org.folio.rest.impl;

import java.util.Date;
import static org.folio.models.TableNames.PIECES_TABLE;
import static org.folio.util.HelperUtils.extractEntityFields;
import static org.folio.util.MetadataUtils.populateMetadata;

import java.util.Map;
import java.util.UUID;

import javax.ws.rs.core.Response;

import org.apache.commons.lang3.StringUtils;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.folio.dao.PostgresClientFactory;
import org.folio.event.service.AuditOutboxService;
import org.folio.models.TableNames;
import org.folio.rest.annotations.Validate;
import org.folio.rest.core.BaseApi;
import org.folio.rest.jaxrs.model.Piece;
import org.folio.rest.jaxrs.model.PieceAuditEvent;
import org.folio.rest.jaxrs.model.PieceCollection;
import org.folio.rest.jaxrs.model.PiecesCollection;
import org.folio.rest.jaxrs.resource.OrdersStoragePieces;
import org.folio.rest.persist.Conn;
import org.folio.rest.jaxrs.resource.OrdersStoragePiecesBatch;
import org.folio.rest.persist.HelperUtils;
import org.folio.rest.persist.PgUtil;
import org.folio.rest.persist.PostgresClient;
import org.folio.rest.tools.utils.TenantTool;
import org.folio.services.piece.PieceService;
import org.folio.spring.SpringContextUtil;
import org.folio.util.DbUtils;
import org.springframework.beans.factory.annotation.Autowired;

import io.vertx.core.AsyncResult;
import io.vertx.core.Context;
import io.vertx.core.Future;
import io.vertx.core.Handler;
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;
import io.vertx.sqlclient.Row;
import io.vertx.sqlclient.RowSet;

public class PiecesAPI extends BaseApi implements OrdersStoragePieces {
private static final Logger log = LogManager.getLogger();
public class PiecesAPI extends BaseApi implements OrdersStoragePieces, OrdersStoragePiecesBatch {

public static final String PIECES_TABLE = "pieces";
private static final Logger log = LogManager.getLogger();

private final PostgresClient pgClient;

@Autowired
private PieceService pieceService;
@Autowired
private AuditOutboxService auditOutboxService;
@Autowired
@@ -66,7 +66,7 @@ public void getOrdersStoragePieces(String query, String totalRecords, int offset
@Validate
public void postOrdersStoragePieces(Piece entity, Map<String, String> okapiHeaders,
Handler<AsyncResult<Response>> asyncResultHandler, Context vertxContext) {
pgClient.withTrans(conn -> createPiece(conn, entity)
pgClient.withTrans(conn -> pieceService.createPiece(conn, entity)
.compose(ignore -> auditOutboxService.savePieceOutboxLog(conn, entity, PieceAuditEvent.Action.CREATE, okapiHeaders)))
.onComplete(ar -> {
if (ar.succeeded()) {
@@ -80,18 +80,6 @@ public void postOrdersStoragePieces(Piece entity, Map<String, String> okapiHeade
});
}

private Future<String> createPiece(Conn conn, Piece piece) {
piece.setStatusUpdatedDate(new Date());
if (StringUtils.isBlank(piece.getId())) {
piece.setId(UUID.randomUUID().toString());
}
log.debug("Creating new piece with id={}", piece.getId());

return conn.save(TableNames.PIECES_TABLE, piece.getId(), piece)
.onSuccess(rowSet -> log.info("Piece successfully created, id={}", piece.getId()))
.onFailure(e -> log.error("Create piece failed, id={}", piece.getId(), e));
}

@Override
@Validate
public void getOrdersStoragePiecesById(String id, Map<String, String> okapiHeaders,
@@ -110,7 +98,7 @@ public void deleteOrdersStoragePiecesById(String id, Map<String, String> okapiHe
@Validate
public void putOrdersStoragePiecesById(String id, Piece entity, Map<String, String> okapiHeaders,
Handler<AsyncResult<Response>> asyncResultHandler, Context vertxContext) {
pgClient.withTrans(conn -> updatePiece(conn, entity, id)
pgClient.withTrans(conn -> pieceService.updatePiece(conn, entity, id)
.compose(ignore -> auditOutboxService.savePieceOutboxLog(conn, entity, PieceAuditEvent.Action.EDIT, okapiHeaders)))
.onComplete(ar -> {
if (ar.succeeded()) {
@@ -124,13 +112,24 @@ public void putOrdersStoragePiecesById(String id, Piece entity, Map<String, Stri
});
}

private Future<RowSet<Row>> updatePiece(Conn conn, Piece piece, String id) {
log.debug("Updating piece with id={}", id);

return conn.update(TableNames.PIECES_TABLE, piece, id)
.compose(DbUtils::failOnNoUpdateOrDelete)
.onSuccess(rowSet -> log.info("Piece successfully updated, id={}", id))
.onFailure(e -> log.error("Update piece failed, id={}", id, e));
@Override
public void putOrdersStoragePiecesBatch(PiecesCollection piecesCollection, Map<String, String> okapiHeaders,
Handler<AsyncResult<Response>> asyncResultHandler, Context vertxContext) {
var piecesIds = extractEntityFields(piecesCollection.getPieces(), Piece::getId);
var pieces = piecesCollection.getPieces().stream().map(piece -> populateMetadata(piece::getMetadata, piece::withMetadata, okapiHeaders)).toList();
log.info("putOrdersStoragePiecesBatch:: Batch updating {} pieces: {}", piecesIds.size(), piecesIds);
pgClient.withTrans(conn -> pieceService.updatePieces(pieces, conn, TenantTool.tenantId(okapiHeaders))
.compose(v -> auditOutboxService.savePiecesOutboxLog(conn, pieces, PieceAuditEvent.Action.EDIT, okapiHeaders)))
.onComplete(ar -> {
if (ar.succeeded()) {
log.info("putOrdersStoragePiecesBatch:: Successfully updated pieces: {}", piecesIds);
auditOutboxService.processOutboxEventLogs(okapiHeaders);
asyncResultHandler.handle(buildNoContentResponse());
} else {
log.error("putOrdersStoragePiecesBatch:: Failed to update pieces: {}", piecesIds, ar.cause());
asyncResultHandler.handle(buildErrorResponse(ar.cause()));
}
});
}

@Override
14 changes: 5 additions & 9 deletions src/main/java/org/folio/rest/impl/PoLineBatchAPI.java
@@ -1,8 +1,8 @@
package org.folio.rest.impl;

import java.util.List;
import static org.folio.util.HelperUtils.extractEntityFields;

import java.util.Map;
import java.util.stream.Collectors;

import javax.ws.rs.core.Response;

@@ -50,11 +50,12 @@ public void putOrdersStoragePoLinesBatch(PoLineCollection poLineCollection, Map<
.compose(cf ->
poLinesBatchService.poLinesBatchUpdate(poLineCollection.getPoLines(), pgClient, okapiHeaders))
.onComplete(ar -> {
var poLineIds = extractEntityFields(poLineCollection.getPoLines(), PoLine::getId);
if (ar.failed()) {
log.error("putOrdersStoragePoLinesBatch:: failed, PO line ids: {} ", getPoLineIdsForLogMessage(poLineCollection.getPoLines()), ar.cause());
log.error("putOrdersStoragePoLinesBatch:: Failed to update PoLines: {}", poLineIds, ar.cause());
asyncResultHandler.handle(buildErrorResponse(ar.cause()));
} else {
log.info("putOrdersStoragePoLinesBatch:: completed, PO line ids: {} ", getPoLineIdsForLogMessage(poLineCollection.getPoLines()));
log.info("putOrdersStoragePoLinesBatch:: Successfully updated PoLines: {}", poLineIds);
auditOutboxService.processOutboxEventLogs(okapiHeaders);
asyncResultHandler.handle(buildNoContentResponse());
}
@@ -66,9 +67,4 @@ protected String getEndpoint(Object entity) {
return HelperUtils.getEndpoint(OrdersStoragePoLinesBatch.class);
}

private String getPoLineIdsForLogMessage(List<PoLine> polines) {
return polines.stream()
.map(PoLine::getId)
.collect(Collectors.joining(", "));
}
}
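
Both batch endpoints above now collect entity ids through a shared `HelperUtils.extractEntityFields` utility instead of a class-local joining helper. The actual implementation is not part of this diff; a minimal sketch of what such a helper might look like, shown only for illustration:

```java
import java.util.List;
import java.util.function.Function;

public final class HelperUtilsSketch {

  private HelperUtilsSketch() {
  }

  // Extracts a single field (e.g. the id) from each entity in the list,
  // mirroring how the batch APIs above collect ids for their log messages.
  public static <T, R> List<R> extractEntityFields(List<T> entities, Function<T, R> extractor) {
    return entities.stream()
      .map(extractor)
      .toList();
  }
}
```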
6 changes: 4 additions & 2 deletions src/main/java/org/folio/rest/impl/TitlesAPI.java
@@ -1,11 +1,14 @@
package org.folio.rest.impl;

import static org.folio.models.TableNames.TITLES_TABLE;

import java.util.Map;

import javax.ws.rs.core.Response;

import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.folio.dao.PostgresClientFactory;
@@ -20,6 +23,7 @@
import io.vertx.core.AsyncResult;
import io.vertx.core.Context;
import io.vertx.core.Handler;

import org.folio.rest.persist.PostgresClient;
import org.folio.services.title.TitleService;
import org.folio.spring.SpringContextUtil;
@@ -29,8 +33,6 @@ public class TitlesAPI extends BaseApi implements OrdersStorageTitles {

private static final Logger log = LogManager.getLogger();

public static final String TITLES_TABLE = "titles";

private final PostgresClient pgClient;

@Autowired
7 changes: 3 additions & 4 deletions src/main/java/org/folio/services/lines/PoLinesService.java
@@ -2,11 +2,12 @@

import static java.util.stream.Collectors.toList;
import static org.folio.dao.RepositoryConstants.MAX_IDS_FOR_GET_RQ_15;
import static org.folio.models.TableNames.PIECES_TABLE;
import static org.folio.models.TableNames.PO_LINE_TABLE;
import static org.folio.models.TableNames.PURCHASE_ORDER_TABLE;
import static org.folio.models.TableNames.TITLES_TABLE;
import static org.folio.rest.core.ResponseUtil.handleFailure;
import static org.folio.rest.core.ResponseUtil.httpHandleFailure;
import static org.folio.rest.impl.TitlesAPI.TITLES_TABLE;
import static org.folio.rest.persist.HelperUtils.ID_FIELD_NAME;
import static org.folio.rest.persist.HelperUtils.JSONB;
import static org.folio.rest.persist.HelperUtils.getCriteriaByFieldNameAndValueNotJsonb;
@@ -31,7 +32,6 @@
import org.folio.models.CriterionBuilder;
import org.folio.okapi.common.GenericCompositeFuture;
import org.folio.rest.core.models.RequestContext;
import org.folio.rest.impl.PiecesAPI;
import org.folio.rest.jaxrs.model.OrderLineAuditEvent;
import org.folio.rest.jaxrs.model.PoLine;
import org.folio.rest.jaxrs.model.PurchaseOrder;
@@ -57,7 +57,6 @@
public class PoLinesService {

private static final String PO_LINE_ID = "poLineId";
private static final String LOCATIONS_HOLDING_ID_FIELD = "location.holdingId";

private final PoLinesDAO poLinesDAO;
private final AuditOutboxService auditOutboxService;
@@ -443,7 +442,7 @@ private Future<Tx<String>> deletePiecesByPOLineId(Tx<String> tx, DBClient client
Promise<Tx<String>> promise = Promise.promise();
Criterion criterion = getCriterionByFieldNameAndValue(PO_LINE_ID, tx.getEntity());

client.getPgClient().delete(tx.getConnection(), PiecesAPI.PIECES_TABLE, criterion, ar -> {
client.getPgClient().delete(tx.getConnection(), PIECES_TABLE, criterion, ar -> {
if (ar.failed()) {
log.error("Delete Pieces failed, criterion={}", criterion, ar.cause());
httpHandleFailure(promise, ar);