FedEx® API integration guide


Overview

Traffic Parrot offers ready-made API mocks and sandbox environments for the various APIs offered by FedEx®, including those listed in the FedEx® Developer Portal API Catalog.

For architecture diagrams and pricing, please review Ready-made FedEx® API mocks, simulators and sandbox environments.

This guide provides step-by-step instructions to help you integrate your application code with the FedEx® API.

We will use the Track API as a working example. You can use this as a template for any other API that you need to integrate with.

These are the steps we will follow to integrate and test our application code with FedEx®:

  1. Analysis
    • Specify business requirements for the integration.
    • Explore the FedEx® API documentation.
    • Identify key API prerequisites, inputs and outputs.
  2. Development
    • Register for access to the FedEx® API.
    • Manually verify API behaviour.
    • Use code samples to write API integration code.
  3. Testing
    • Manual exploratory testing.
    • Manual regression testing.
    • Automated unit testing of key business logic.
    • Automated integration testing of API usage.
    • Automated end to end acceptance testing of application.
  4. Production
    • Release checklist.
    • Monitoring and observability.
    • Change management.

Sample tracking application

Throughout this guide we will use a sample application that tracks FedEx® deliveries and has a web user interface. We provide both the sources and the executable application.

Analysis

The first stage of integration is to determine how to meet the business requirements using the features provided by the API.

Our example business is an online retailer that offers FedEx® as a shipping method to its customers.

Specify business requirements for the integration

In this example integration, the business has these key requirements:

  1. Customer support agents need to check the current delivery status of customer orders.
  2. Customers will provide their FedEx® tracking number.
  3. The current delivery status displayed must be human-readable.
  4. The time and date that the delivery status last changed must be displayed.
  5. Any error messages displayed must be human-readable.

The front end team have provided some mock-ups of the tracking UI.

The business analysts have provided a sketch of the tracking message flow.

For the purpose of this sample application, there will also be a login page to supply the API credentials.

In a real application these details would be hidden from the end user.

Explore the FedEx® API documentation

The official documentation contains API Catalog pages which we can explore to understand how to meet our business requirements using the APIs.

There is also an official API Reference page with further information on data and special values used in the APIs.

API Authorization

We can see that API Authorization issues OAuth 2.0 bearer tokens, which are then used to authorize requests to the other API endpoints. This authorization is a prerequisite for using the Track API.

Track API

We can see that the Track API has an endpoint that accepts a FedEx® tracking number and returns the most up-to-date tracking information available.

Identify key API prerequisites, inputs and outputs

Taking a more detailed look at the API schema and examples, we can identify the key request and response data required to meet the business requirements. Please find a summary below.

API Authorization

Details obtained from the API Authorization documentation page.

Type    | Location                | Key                 | Value
Request | HTTP request header     | Content-Type        | application/x-www-form-urlencoded
Request | HTTP request form field | grant_type          | client_credentials
Request | HTTP request form field | client_id           | FedEx® customer API Key
Request | HTTP request form field | client_secret       | FedEx® customer Secret Key
Success | HTTP response status    | Status code         | 200
Success | JSON response body      | $.access_token      | System generated ${access_token}
Error   | HTTP response status    | Status code         | 4xx/5xx
Error   | JSON response body      | $.errors[0].message | Human readable error message
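
To make the table above concrete, here is a minimal sketch of the token request in plain Java, using java.net.http.HttpClient and Jayway JsonPath. The sample application later in this guide uses OkHttp instead, and the placeholder credential strings here must be replaced with the Test Key details from your own FedEx® Developer Portal project.

import com.jayway.jsonpath.JsonPath;

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ObtainAccessTokenSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: substitute the Test Key details from your FedEx® Developer Portal project.
        String apiKey = "YOUR_TEST_API_KEY";
        String secretKey = "YOUR_TEST_SECRET_KEY";

        // Form fields exactly as listed in the table above.
        String form = "grant_type=client_credentials"
                + "&client_id=" + URLEncoder.encode(apiKey, StandardCharsets.UTF_8)
                + "&client_secret=" + URLEncoder.encode(secretKey, StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder(URI.create("https://apis-sandbox.fedex.com/oauth/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        if (response.statusCode() == 200) {
            // Success: read the bearer token for use in later requests.
            String accessToken = JsonPath.read(response.body(), "$.access_token");
            System.out.println("Obtained an access token of length " + accessToken.length());
        } else {
            // Error: read the human readable error message.
            String errorMessage = JsonPath.read(response.body(), "$.errors[0].message");
            System.out.println("Token request failed: " + errorMessage);
        }
    }
}
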
Track API

Details obtained from the Track API documentation page.

Type    | Location             | Key                              | Value
Request | HTTP request header  | Authorization                    | Bearer ${access_token}
Request | HTTP request header  | Content-Type                     | application/json
Request | HTTP request header  | x-locale                         | Locale to format messages in
Request | JSON request body    | $..trackingNumber                | Tracking number to search for
Success | HTTP response status | Status code                      | 200
Success | JSON response body   | $..scanEvents[0].derivedStatus   | Tracking status message
Success | JSON response body   | $..scanEvents[0].date            | Tracking status date and time
Error   | HTTP response status | Status code                      | 4xx/5xx
Error   | JSON response body   | $.errors[0].message              | Human readable error message
Error   | JSON response body   | $..trackResults[0].error.message | Human readable error message
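
To make the Track API rows concrete, here is a small sketch of the request body and the fully expanded JSONPath expressions behind the $.. shorthand above; these match the integration code shown later in the Development section.

public class TrackApiRequestSketch {
    // JSONPath expressions for the response fields listed in the table above.
    static final String STATUS = "$.output.completeTrackResults[0].trackResults[0].scanEvents[0].derivedStatus";
    static final String DATE = "$.output.completeTrackResults[0].trackResults[0].scanEvents[0].date";
    static final String REQUEST_ERROR = "$.errors[0].message";
    static final String TRACK_RESULT_ERROR = "$.output.completeTrackResults[0].trackResults[0].error.message";

    // Minimal request body: for our business requirements only the tracking number needs to vary.
    static String trackRequestJson(String trackingNumber) {
        return "{\n" +
                "  \"includeDetailedScans\": true,\n" +
                "  \"trackingInfo\": [\n" +
                "    { \"trackingNumberInfo\": { \"trackingNumber\": \"" + trackingNumber + "\" } }\n" +
                "  ]\n" +
                "}";
    }
}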

Development

To integrate our application with the Track API we will need to follow these steps:

  1. Register for access to the FedEx® API.
  2. Manually verify API behaviour.
  3. Use code samples to write API integration code.

Register for access to the FedEx® API

You will need to register to obtain an API key for the test environment.

  1. Sign up for an account on the FedEx® Developer Portal page
    • Click Sign Up or Log In
    • Complete the registration form
  2. Create an organization after registration or by visiting the FedEx® create organization page
    • Name your organization
    • Click Create
  3. Create a project on the FedEx® projects page
    • Click Create Project
    • Select the Track API
    • Select any countries you plan to ship within
    • Complete the rest of the create project form
  4. Take note of the Test Key details

Manually verify API behaviour

Before we start writing integration code, we can use an HTTP REST API client tool such as Postman to make manual HTTP requests to the API and verify the behaviour we expect.

To help you get started, we have a sample Postman collection which you can import into your own workspace.

Obtaining an API authentication token

First, we need to verify that we can successfully authenticate with our API Key and Secret Key by submitting an OAuth 2.0 request to https://apis-sandbox.fedex.com/oauth/token as follows.

  1. Import the sample Postman collection
    • Click Run in Postman
    • Choose Postman for Web
    • Select your workspace and click the Import button
  2. Edit the FedEx Test Environment Template
    • Navigate to the environment
    • Set the FedExBaseURL current value to https://apis-sandbox.fedex.com
    • Set the FedExApiKey current value to your FedEx® test environment API Key
    • Set the FedExSecretKey current value to your FedEx® test environment Secret Key
    • Click the Save button
  3. Obtain an OAuth 2.0 token
    • Navigate to the collection
    • Choose FedEx Test Environment Template from the environment dropdown
    • Scroll down and click the Get New Access Token button under the "Configure New Token" section
    • Wait for authentication
    • Click the Use Token button
    • You have now obtained an OAuth 2.0 token that can be used for other API requests

Making a test request

Next, we can verify that we can successfully make a test request to the Track API by submitting a request to https://apis-sandbox.fedex.com/track/v1/trackingnumbers as follows.

  1. Navigate to the collection and click on the Track By Tracking Number request
  2. Click the Send button to send a sample request to the FedEx® test sandbox environment
  3. You have now performed your first FedEx® Track API request!
  4. You can explore other responses, for example by changing the tracking number to an empty one to produce an error response

Using test requests during development

Now that we have verified we can interact with the Track API manually, we are ready to start writing our integration code.

If you encounter issues during development, you can send manual test requests to verify that the issue is not with your own code.

Use code samples to write API integration code

Now we are ready to write API integration code so that our sample application can communicate with the FedEx® Track API.

Integrating using the sample code

The Track API documentation contains the API schema, requests, responses and sample code to help us get started.

Traffic Parrot sample tracking application code contains an example of how to use the Java code samples to write a real integration with the Track API.

package com.trafficparrot.example.fedex.tracking;

import com.jayway.jsonpath.Configuration;
import com.jayway.jsonpath.DocumentContext;
import com.jayway.jsonpath.JsonPath;
import okhttp3.*;
import okhttp3.logging.HttpLoggingInterceptor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import static com.jayway.jsonpath.Option.SUPPRESS_EXCEPTIONS;
import static okhttp3.logging.HttpLoggingInterceptor.Level.BODY;
import static org.apache.commons.lang3.StringUtils.isNotBlank;

@Service
public class FedExTrackingService {
    private static final Logger LOGGER = LoggerFactory.getLogger(FedExTrackingService.class);
    private static final String FEDEX_JSON_PATH_FIRST_TRACK_RESULT = "$.output.completeTrackResults[0].trackResults[0]";
    private static final String FEDEX_JSON_PATH_TRACK_MOST_RECENT_SCAN_EVENT = FEDEX_JSON_PATH_FIRST_TRACK_RESULT + ".scanEvents[0]";
    private static final String FEDEX_JSON_PATH_ERROR_MESSAGE = "$.errors[0].message";

    private final FedExApiCredentials fedExApiCredentials;

    @Autowired
    public FedExTrackingService(FedExApiCredentials fedExApiCredentials) {
        this.fedExApiCredentials = fedExApiCredentials;
    }

    public boolean isLoggedIn() {
        return fedExApiCredentials.hasApiCredentials();
    }

    public LogInResponse loginToApi(String apiKey, String secretKey, String apiBaseUrl) {
        try {
            OkHttpClient client = okHttpClient();
            RequestBody body = new FormBody.Builder()
                    .add("grant_type", "client_credentials")
                    .add("client_id", apiKey)
                    .add("client_secret", secretKey)
                    .build();
            Request request = new Request.Builder()
                    .url(apiBaseUrl + "/oauth/token")
                    .post(body)
                    .addHeader("Content-Type", "application/x-www-form-urlencoded")
                    .build();

            Response response = client.newCall(request).execute();
            ResponseBody responseBody = response.body();
            if (responseBody == null) {
                LOGGER.error("Missing response body");
                return logInTechnicalErrorResponse();
            }
            DocumentContext responseJson = parseJson(responseBody.string());
            if (response.code() == 200) {
                String accessToken = responseJson.read("$.access_token");
                fedExApiCredentials.setApiCredentials(apiBaseUrl, accessToken);
                return new LogInResponse(false, "");
            } else {
                String errorMessage = responseJson.read(FEDEX_JSON_PATH_ERROR_MESSAGE);
                return new LogInResponse(true, errorMessage);
            }
        } catch (Exception e) {
            LOGGER.error("Uncaught exception", e);
            return logInTechnicalErrorResponse();
        }
    }

    public TrackingResponse trackByTrackingNumber(String trackingNumber) {
        String requestJson = "{\n" +
                "  \"includeDetailedScans\": true,\n" +
                "  \"trackingInfo\": [\n" +
                "    {\n" +
                "      \"trackingNumberInfo\": {\n" +
                "        \"trackingNumber\": \"" + trackingNumber + "\"\n" +
                "      }\n" +
                "    }\n" +
                "  ]\n" +
                "}";
        try {
            OkHttpClient client = okHttpClient();
            MediaType mediaType = MediaType.parse("application/json");
            RequestBody body = RequestBody.create(mediaType, requestJson);
            Request request = new Request.Builder()
                    .url(fedExApiCredentials.getApiBaseUrl() + "/track/v1/trackingnumbers")
                    .post(body)
                    .addHeader("Content-Type", "application/json")
                    .addHeader("X-locale", "en_US")
                    .addHeader("Authorization", "Bearer " + fedExApiCredentials.getBearerToken())
                    .build();

            Response response = client.newCall(request).execute();
            ResponseBody responseBody = response.body();
            if (responseBody == null) {
                LOGGER.error("Missing response body");
                return trackingTechnicalErrorResponse();
            }
            DocumentContext responseJson = parseJson(responseBody.string());
            if (response.code() == 200) {
                String errorMessage = responseJson.read(FEDEX_JSON_PATH_FIRST_TRACK_RESULT + ".error.message");
                if (isNotBlank(errorMessage)) {
                    return new TrackingResponse(true, errorMessage, "");
                }

                String derivedStatus = responseJson.read(FEDEX_JSON_PATH_TRACK_MOST_RECENT_SCAN_EVENT + ".derivedStatus");
                String date = responseJson.read(FEDEX_JSON_PATH_TRACK_MOST_RECENT_SCAN_EVENT + ".date");
                return new TrackingResponse(false, "", derivedStatus + " at " + date);
            } else {
                String errorMessage = responseJson.read(FEDEX_JSON_PATH_ERROR_MESSAGE);
                return new TrackingResponse(true, errorMessage, "");
            }
        } catch (Exception e) {
            LOGGER.error("Uncaught exception", e);
            return trackingTechnicalErrorResponse();
        }
    }

    private DocumentContext parseJson(String responseBody) {
        return JsonPath.parse(responseBody, Configuration.builder().options(SUPPRESS_EXCEPTIONS).build());
    }

    private TrackingResponse trackingTechnicalErrorResponse() {
        return new TrackingResponse(true, "Technical error, see logs for details", "");
    }

    private LogInResponse logInTechnicalErrorResponse() {
        return new LogInResponse(true, "Technical error, see logs for details");
    }

    private OkHttpClient okHttpClient() {
        return new OkHttpClient.Builder()
                .addInterceptor(new HttpLoggingInterceptor().setLevel(BODY))
                .build();
    }
}

Key points:
  • We store the FedExApiCredentials on login by saving the OAuth 2.0 bearer token for later use in the Authorization HTTP header field.
  • We make use of the tolerant reader pattern via the use of JSONPath expressions to parse only the fields that we are specifically interested in.
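
As a small illustration of the tolerant reader point, the sketch below parses a trimmed, made-up response body with SUPPRESS_EXCEPTIONS enabled: fields we do not read are ignored, and a field that is absent simply returns null instead of throwing an exception.

import com.jayway.jsonpath.Configuration;
import com.jayway.jsonpath.DocumentContext;
import com.jayway.jsonpath.JsonPath;

import static com.jayway.jsonpath.Option.SUPPRESS_EXCEPTIONS;

public class TolerantReaderSketch {
    public static void main(String[] args) {
        // A trimmed, illustrative response body: extra fields are ignored, missing fields do not throw.
        String responseBody = "{\n" +
                "  \"output\": { \"completeTrackResults\": [ { \"trackResults\": [ {\n" +
                "    \"scanEvents\": [ { \"derivedStatus\": \"Picked up\", \"date\": \"2019-08-14T13:33:00-04:00\" } ],\n" +
                "    \"aFieldWeDoNotRead\": 42\n" +
                "  } ] } ] }\n" +
                "}";

        DocumentContext json = JsonPath.parse(responseBody,
                Configuration.builder().options(SUPPRESS_EXCEPTIONS).build());

        // Reads only the fields we care about; unknown fields in the payload are simply ignored.
        String status = json.read("$.output.completeTrackResults[0].trackResults[0].scanEvents[0].derivedStatus");
        // A field that is absent returns null instead of throwing, thanks to SUPPRESS_EXCEPTIONS.
        String missingError = json.read("$.output.completeTrackResults[0].trackResults[0].error.message");

        System.out.println("status=" + status + ", missingError=" + missingError);
    }
}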

This code is used as part of the sample application's overall code architecture.

You can find the complete sample application in the public Traffic Parrot GitHub repository; you can build and run a copy yourself to help you get started with your own integration project.

Beyond the sample application code

There are additional points to consider that are beyond the scope of the sample application code. Please find a summary below.

  • There is an API rate limit of 750 transactions every 10 seconds. If you think your application will exceed this limit, you should write additional client-side code to throttle API requests so that the limit is not exceeded (see the sketch after this list).
  • Consider logging all requests and responses to the API. This will help a lot if you have to debug an issue, because you will be able to see the raw request and response data that was sent and received.
  • Consider using the x-customer-transaction-id header to help correlate requests and responses in the logs.
  • If your application is multi-user or otherwise concurrent, make sure that shared state (e.g. the OAuth 2.0 token) is accessed in a thread-safe way, to avoid race conditions.
  • Handle expiry of API authorization by requesting a new OAuth 2.0 token, ideally before the current token expires.
  • Review the official FedEx APIs Integration Best Practices guide, in particular taking note of:
    • Keeping your API Key and Secret Key private and secure.
    • Validating input before sending requests to the API.
    • Narrowing searches using filters such as date ranges.
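
For the throttling point above, here is a minimal sketch of one way to stay under the documented limit, assuming Guava is available on the classpath (750 transactions every 10 seconds averages out to 75 permits per second; a real implementation may also need to account for bursts and for multiple application instances sharing the same account):

import com.google.common.util.concurrent.RateLimiter;

public class ThrottledTrackingClient {
    // 750 transactions per 10 seconds averages out to 75 requests per second.
    private final RateLimiter rateLimiter = RateLimiter.create(75.0);
    private final FedExTrackingService trackingService;

    public ThrottledTrackingClient(FedExTrackingService trackingService) {
        this.trackingService = trackingService;
    }

    public TrackingResponse trackByTrackingNumber(String trackingNumber) {
        // Blocks until a permit is available, so the average request rate never exceeds the API limit.
        rateLimiter.acquire();
        return trackingService.trackByTrackingNumber(trackingNumber);
    }
}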

Testing

We can employ a variety of different testing techniques that each have pros and cons.

Below you will find details on how to apply five common testing techniques.

Each technique is compared on the same criteria: whether it is repeatable, whether it is automated, its initial and ongoing cost, and whether it detects logical issues and integration issues. The five techniques are:

  • Manual exploratory testing
  • Manual regression testing
  • Automated unit testing of key business logic
  • Automated integration testing of API usage
  • Automated end to end acceptance testing of application

Manual exploratory testing

This is one of the most common testing techniques and simply involves running a copy of your application and testing it manually through the same interface that the end users would use.

The testing cycle typically involves:
  1. Start up a copy of the system under test (e.g. release in a UAT shared environment or start up locally via Docker).
  2. Ensure access to any API dependencies (e.g. using real APIs or API sandboxes or custom virtual services).
  3. Use the interface to manually explore how the system behaves (e.g. using a web browser).
  4. Observe the system log files for more details when errors occur (e.g. using the command line or tools like Splunk).
  5. Write down manual steps to begin to form a manual regression test suite (e.g. in a document or spreadsheet).
  6. Use additional tools to supplement the discovery process (e.g. to manually check the behaviour of APIs).
  7. Raise bug reports when unexpected behaviour is discovered (e.g. in a ticketing system like Jira or Trello).
Pros:
  • Discover bugs and behaviour that was not expected.
  • Get started without writing additional test code.
  • No coding required.
Cons:
  • Not repeatable; the testing session is performed without a specific test plan to follow.
  • Requires reliable real APIs or sandboxes or virtual services for all dependencies.
  • Time consuming to perform.

Manual regression testing

Much like manual exploratory testing this involves running a copy of your application and testing it manually through the same interface that the end users would use.

The difference is that a pre-defined test plan is followed, to check that expected behaviour has not changed since the last release.

Test plans can be written using a BDD given/when/then approach or as a list of steps to perform and check.
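
For example, a single manual test plan entry written in the given/when/then style might read:

  • Given a customer support agent is logged in to the tracking application
  • When they look up the tracking number 123456789012
  • Then the latest delivery status and the date and time it changed are displayed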

The testing cycle typically involves:
  1. Write down manual steps to form a manual regression test suite (e.g. in a document or spreadsheet).
  2. Start up a copy of the system under test (e.g. release in a UAT shared environment or start up locally via Docker).
  3. Ensure access to any API dependencies (e.g. using real APIs or API sandboxes or custom virtual services).
  4. Follow the manual steps in each of the test plans and verify the expected behaviour.
  5. Observe the system log files for more details when errors occur (e.g. using the command line or tools like Splunk).
  6. Raise bug reports when unexpected behaviour is discovered (e.g. in a ticketing system like Jira or Trello).
Pros:
  • Repeatable; the testing session is performed with a specific test plan to follow.
  • Get started without writing additional test code.
  • No coding required.
Cons:
  • Requires reliable real APIs or sandboxes or virtual services for all dependencies.
  • Time consuming to perform.

Automated unit testing of key business logic

Automated unit testing involves taking a subset of your application code and writing automated tests for it in the same language.

Tests are automatically run as part of a CI/CD build every time the code changes.

Here is some sample code that is used to unit test business logic in isolation.

    @Test
    void postTrackSuccessDisplaysStatusOnTrackPage() {
        given(trackingService.isLoggedIn()).willReturn(true);
        given(trackingService.trackByTrackingNumber(TRACKING_NUMBER)).willReturn(new TrackingResponse(false, EXAMPLE_ERROR_MESSAGE, EXAMPLE_STATUS));

        String page = trackingController.postTrack(TRACKING_NUMBER, model);

        then(model).should().addAttribute("status", EXAMPLE_STATUS);
        assertThat(page).isEqualTo("track");
    }

    @Test
    void postTrackErrorDisplaysErrorOnTrackPage() {
        given(trackingService.isLoggedIn()).willReturn(true);
        given(trackingService.trackByTrackingNumber(TRACKING_NUMBER)).willReturn(new TrackingResponse(true, EXAMPLE_ERROR_MESSAGE, ""));

        String page = trackingController.postTrack(TRACKING_NUMBER, model);

        then(model).should().addAttribute("error", EXAMPLE_ERROR_MESSAGE);
        assertThat(page).isEqualTo("track");
    }
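
For context, the tests above assume a test class set up roughly as in the sketch below (assuming JUnit 5, Mockito and AssertJ; TrackingController is the assumed name of the sample application's web controller, and the exact wiring in the repository may differ):

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;
import org.springframework.ui.Model;

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.BDDMockito.given;
import static org.mockito.BDDMockito.then;

@ExtendWith(MockitoExtension.class)
class TrackingControllerTest {
    private static final String TRACKING_NUMBER = "123456789012";
    private static final String EXAMPLE_STATUS = "Picked up at 2019-08-14T13:33:00-04:00";
    private static final String EXAMPLE_ERROR_MESSAGE = "Please provide tracking number.";

    @Mock
    private FedExTrackingService trackingService; // in-memory mock isolates the controller from the real API

    @Mock
    private Model model; // the Spring MVC model the controller populates

    private TrackingController trackingController; // assumed controller class name

    @BeforeEach
    void setUp() {
        // Assumed constructor; the sample repository may wire this differently.
        trackingController = new TrackingController(trackingService);
    }

    // ... the postTrack... tests shown above live here ...
}
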
The testing cycle typically involves:
  1. Specify tests based on business requirements (e.g. using a BDD given/when/then approach).
  2. Write test code in the same language as the application code (e.g. using a unit testing framework such as JUnit).
  3. Isolate the unit of application code under test (e.g. by using in memory mocks such as Mockito for code dependencies).
  4. Run tests and fix any issues until they are green (e.g. using a red/green/refactor approach).
  5. Commit test code to the same version control repository that the application code is stored in (e.g. Git or SVN).
  6. Run code automatically before each release, typically triggered per commit in CI/CD (e.g. using a Jenkins or TeamCity build).
  7. Fail the CI/CD build when there are test failures and prevent the build being used for a production release.
Pros:
  • Write tests once only.
  • Easy to run the tests as an automated regression suite each time the code changes.
  • Small pieces of the application code can be tested in isolation.
  • Tests are typically cheap to write, debug and maintain.
Cons:
  • Assumptions about the behaviour of code outside the unit under test must be made carefully to avoid misleading results.
  • Maintenance required when the application code is refactored.

Automated integration testing of API usage

Much like automated unit testing this involves taking a subset of your application code and writing automated tests for it in the same language.

The difference is that the area of focus is the code that integrates with external dependencies such as the disk or network based APIs, rather than pure in-memory business logic.

Here is some sample code that is used to integration test the API client code in isolation from the rest of the application.

    @Test
    void trackSuccess() {
        logInSuccess();

        TrackingResponse trackingResponse = trackingService.trackByTrackingNumber("123456789012");

        assertThat(trackingResponse.isError).isFalse();
        assertThat(trackingResponse.errorMessage).isEmpty();
        assertThat(trackingResponse.latestStatus).matches("Picked up at 2019-08-14T13:33:00-04:00");
    }

    @Test
    void trackFailure() {
        logInSuccess();

        TrackingResponse trackingResponse = trackingService.trackByTrackingNumber("");

        assertThat(trackingResponse.isError).isTrue();
        assertThat(trackingResponse.errorMessage).isEqualTo("Please provide tracking number.");
        assertThat(trackingResponse.latestStatus).isEmpty();
    }

    void logInSuccess() {
        LogInResponse logInResponse = trackingService.loginToApi(FED_EX_API_USER.apiKey, FED_EX_API_USER.secretKey.value, FED_EX_API_USER.baseUrl);

        assertThat(logInResponse.isError).isFalse();
        assertThat(logInResponse.errorMessage).isEmpty();
        assertThat(trackingService.isLoggedIn()).isTrue();
    }
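
Unlike the unit tests, these tests talk to a real endpoint (the FedEx® test sandbox or a Traffic Parrot virtual service), so the FED_EX_API_USER fixture has to supply real test credentials. Below is a minimal sketch of one way to do that; the class, field and environment variable names are assumptions for illustration rather than the exact types used in the sample repository.

public class FedExApiUser {
    public final String apiKey;
    public final String secretKey;
    public final String baseUrl;

    public FedExApiUser(String apiKey, String secretKey, String baseUrl) {
        this.apiKey = apiKey;
        this.secretKey = secretKey;
        this.baseUrl = baseUrl;
    }

    // Read test credentials from the environment so they are never committed to version control.
    public static FedExApiUser fromEnvironment() {
        return new FedExApiUser(
                System.getenv("FEDEX_TEST_API_KEY"),
                System.getenv("FEDEX_TEST_SECRET_KEY"),
                "https://apis-sandbox.fedex.com");
    }
}
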
The testing cycle typically involves:
  1. Specify tests based on business requirements (e.g. using a BDD given/when/then approach).
  2. Write test code in the same language as the application code (e.g. using a unit testing framework such as JUnit).
  3. Ensure access to any API dependencies (e.g. using real APIs or API sandboxes or custom virtual services).
  4. Run tests and fix any issues until they are green (e.g. using a red/green/refactor approach).
  5. Commit test code to the same version control repository that the application code is stored in (e.g. Git or SVN).
  6. Run code automatically before each release, typically triggered per commit in CI/CD (e.g. using a Jenkins or TeamCity build).
  7. Fail the CI/CD build when there are test failures and prevent the build being used for a production release.
Pros:
  • Write tests once only.
  • Easy to run the tests as an automated regression suite each time the code changes.
  • Small pieces of the application code can be tested in isolation.
  • Tests are typically cheap to write, debug and maintain.
Cons:
  • Maintenance required when the application code is refactored.
  • Requires reliable real APIs or sandboxes or virtual services for all dependencies.

Automated end to end acceptance testing of application

Much like manual regression testing this involves running a copy of your application and testing it through the same interface that the end users would use.

The difference is that the tests are automated instead of manual.

Automated acceptance testing can also produce human-readable reports that document the behaviour of the application.

Here is some sample code that is used to automate end to end acceptance testing.

    @Test
    void trackUsingTrackingNumberSuccess() {
        givenUserHasLoggedIn(FED_EX_API_USER);
        whenTheUserLooksUpTrackingNumber("123456789012");
        thenTheLatestTrackingStatusIsDisplayed("Picked up at 2019-08-14T13:33:00-04:00");
    }

    @Test
    void trackUsingTrackingNumberFailure() {
        givenUserHasLoggedIn(FED_EX_API_USER);
        whenTheUserLooksUpTrackingNumber("");
        thenAnErrorMessageIsDisplayed("Please provide tracking number.");
    }
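
The given/when/then helpers above drive the application through its real web user interface. Below is a brief sketch of what two of those helpers could look like using Selenium WebDriver; Selenium itself, the /track URL and the element ids are assumptions for illustration, not necessarily what the sample repository uses (org.openqa.selenium and AssertJ imports are assumed).

    // Assumed test fixtures for this sketch: a WebDriver instance and the URL of the running application.
    WebDriver driver = new ChromeDriver();
    String applicationBaseUrl = "http://localhost:8080"; // assumed local test URL

    // "When" helper: drives the tracking page through a real browser.
    void whenTheUserLooksUpTrackingNumber(String trackingNumber) {
        driver.get(applicationBaseUrl + "/track");                            // assumed page URL
        driver.findElement(By.id("trackingNumber")).sendKeys(trackingNumber); // assumed input field id
        driver.findElement(By.id("track")).click();                           // assumed submit button id
    }

    // "Then" helper: asserts on what the end user actually sees.
    void thenTheLatestTrackingStatusIsDisplayed(String expectedStatus) {
        assertThat(driver.findElement(By.id("status")).getText()).isEqualTo(expectedStatus); // assumed status element id
    }
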
The testing cycle typically involves:
  1. Specify tests based on business requirements (e.g. using a BDD given/when/then approach).
  2. Write test code in any language (e.g. using a unit testing framework such as JUnit).
  3. Start up a copy of the system under test (e.g. release in a CI/CD automation environment or start up locally via Docker).
  4. Ensure access to any API dependencies (e.g. using real APIs or API sandboxes or custom virtual services).
  5. Run tests and fix any issues until they are green (e.g. using a red/green/refactor approach).
  6. Commit test code to the same version control repository that the application code is stored in (e.g. Git or SVN).
  7. Run code automatically before each release, typically triggered per commit in CI/CD (e.g. using a Jenkins or TeamCity build).
  8. Produce reports that document the behaviour of the system (e.g. using Cucumber or Allure).
  9. Fail the CI/CD build when there are test failures and prevent the build being used for a production release.
Pros:
  • Write tests once only.
  • Easy to run the tests as an automated regression suite each time the code changes.
  • Produce human-readable reports that specify the behaviour of the application.
Cons:
  • Tests are typically costly to write, debug, run and maintain.
  • Requires reliable real APIs or sandboxes or virtual services for all dependencies.

Production

Outside of the test environments, there are additional points to consider when moving to production.

Release checklist

Before you are allowed to use the production APIs, FedEx® has an API Certification process to follow for some APIs; follow the steps in the official guide to enable your production API key.

If you need assistance getting your API Certification, Traffic Parrot offers consulting services where we can walk you through any issues you encounter in becoming production ready.

After this, here is a sample release checklist that you can use to help decide if a release is ready to go to production.

  • Automated regression test suites passed successfully.
  • Manual regression test suites passed successfully.
  • Manual exploratory testing has been performed on any new features.
  • Automated tests were written for any new feature areas.
  • For new API integrations, testing has been performed with the real API.
  • The impact of known issues and possible workarounds has been explored.
  • The production application configuration uses the production API URL.

Monitoring and observability

In a production environment, it can be critical to watch, understand, inspect and debug the running system to make sure things are working as expected and respond effectively when they are not.

When it comes to monitoring API integrations, the following are important:
  • Logs of API requests and responses, including the complete message headers and body.
  • Use correlation IDs to make it possible to match each response to the request that produced it (see the sketch after this list).
  • Alerts when API error rates are higher than usual, which could indicate a bug or an issue with the API.
  • Alerts when API response times are longer than usual, which could indicate a network slowdown or an issue with the API.
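
As an example of the correlation ID point above, here is a minimal sketch of an OkHttp interceptor that stamps every outgoing FedEx® API request with a unique id via the x-customer-transaction-id header and logs it together with the response status; a sketch only, to be adapted to your own logging setup:

import okhttp3.Interceptor;
import okhttp3.Request;
import okhttp3.Response;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.IOException;
import java.util.UUID;

public class CorrelationIdInterceptor implements Interceptor {
    private static final Logger LOGGER = LoggerFactory.getLogger(CorrelationIdInterceptor.class);

    @Override
    public Response intercept(Chain chain) throws IOException {
        // Stamp each request with a unique id so request and response log lines can be correlated.
        String correlationId = UUID.randomUUID().toString();
        Request request = chain.request().newBuilder()
                .addHeader("x-customer-transaction-id", correlationId)
                .build();
        LOGGER.info("FedEx API request {} {} correlationId={}", request.method(), request.url(), correlationId);

        Response response = chain.proceed(request);
        LOGGER.info("FedEx API response status={} correlationId={}", response.code(), correlationId);
        return response;
    }
}

It could be registered alongside the HttpLoggingInterceptor in the okHttpClient() method of the sample service.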

Change management

The FedEx® APIs use semantic versioning to manage API versions.

Each version is represented by the version format of major.minor (e.g. Ship API 1.1).
  • A new major version means that the change is not backwards-compatible.
  • A new minor version means that the change is backwards-compatible.
To mitigate the impact of API consumer or producer changes you can:
  • Code using the Tolerant Reader pattern.
  • Integration test with the real API before each release of your application.
  • Stay on the FedEx® API mailing list to be notified of major changes.

Integration support options

Two sandbox options are available to support your integration:

  • Traffic Parrot sandbox
  • FedEx® sandbox

Common integration issues

Here are some of the most common integration issues FedEx® API users face with suggested workarounds or solutions.

Service unavailable

HTTP/1.1 503 Service Unavailable
X-API-Mode: Sandbox
Content-Encoding: gzip
Content-Type: application/json;charset=UTF-8
Content-Length: 236
Server: Layer7-API-Gateway
Date: Mon, 18 Apr 2022 13:59:07 GMT
Connection: close
Vary: Accept-Encoding
Server-Timing: cdn-cache; desc=MISS
Server-Timing: edge; dur=105
Server-Timing: origin; dur=6067
Body:
{
    "transactionId": "18b5df2b-2786-41f3-83e2-22f1307f2672",
    "errors": [
        {
            "code": "SERVICE.UNAVAILABLE.ERROR",
            "message": "The service is currently unavailable and we are working to resolve the issue. We apologize for any inconvenience. Please check back at a later time."
        }
    ]
}
Problem

FedEx® sandbox APIs are intermittently unavailable, resulting in error messages that block testing.

Solution

Traffic Parrot FedEx® sandbox APIs solve this issue by being highly available for our customers.

Unknown tracking number

HTTP/1.1 200 OK
X-API-Mode: Sandbox
Content-Encoding: gzip
Content-Type: application/json
Content-Length: 301
Server: Layer7-API-Gateway
Date: Tue, 19 Apr 2022 12:29:04 GMT
Connection: keep-alive
Vary: Accept-Encoding
Set-Cookie: JSESSIONID=7A3EB75A8A39D3F4C5231A03786A1EC8; Path=/track/v1; Secure; HttpOnly; Domain=apis-sandbox.fedex.com
Set-Cookie: __VCAP_ID__=e063ffd3-0bf4-41a1-6598-4e4f; Path=/track/v1; HttpOnly; Secure; Domain=apis-sandbox.fedex.com
Server-Timing: cdn-cache; desc=MISS
Server-Timing: edge; dur=93
Server-Timing: origin; dur=197
Body:
{
    "transactionId": "a6a9dbff-7d26-477d-bdbd-688645a9585f",
    "customerTransactionId": "22467615-2115-4c0b-b169-fade80e11aec",
    "output": {
        "completeTrackResults": [
            {
                "trackingNumber": "920241085725456",
                "trackResults": [
                    {
                        "trackingNumberInfo": {
                            "trackingNumber": "920241085725456",
                            "trackingNumberUniqueId": "",
                            "carrierCode": ""
                        },
                        "error": {
                            "code": "TRACKING.TRACKINGNUMBER.NOTFOUND",
                            "message": "Tracking number cannot be found. Please correct the tracking number and try again."
                        }
                    }
                ]
            }
        ]
    }
}
Problem

FedEx® sandbox API mock tracking numbers are intermittently not found, resulting in error messages that block testing.

Sometimes the issue is genuine and an unsupported tracking number has been provided, which can be fixed by providing a supported tracking number.

Solution

Traffic Parrot FedEx® sandbox APIs solve this issue by having predictable, repeatable behaviour for our customers.

System unavailable

HTTP/1.1 200 OK
X-API-Mode: Sandbox
Content-Encoding: gzip
Content-Type: application/json
Content-Length: 262
Server: Layer7-API-Gateway
Date: Tue, 19 Apr 2022 12:31:05 GMT
Connection: keep-alive
Vary: Accept-Encoding
Set-Cookie: JSESSIONID=1CA447C0AC9255093CB82E169C292684; Path=/track/v1; Secure; HttpOnly; Domain=apis-sandbox.fedex.com
Set-Cookie: __VCAP_ID__=f54d4735-4d59-46e2-7967-6877; Path=/track/v1; HttpOnly; Secure; Domain=apis-sandbox.fedex.com
Server-Timing: cdn-cache; desc=MISS
Server-Timing: edge; dur=92
Server-Timing: origin; dur=5235
Body:
{
    "transactionId": "1378803b-d312-4014-af90-a7ad734dd4a8",
    "customerTransactionId": "c5dd0962-7c96-436d-8a6b-35dea6c32f08",
    "output": {
        "completeTrackResults": [
            {
                "trackingNumber": "568838414941",
                "trackResults": [
                    {
                        "trackingNumberInfo": {
                            "trackingNumber": "568838414941",
                            "trackingNumberUniqueId": "",
                            "carrierCode": ""
                        },
                        "error": {
                            "code": "SYSTEM.UNAVAILABLE.EXCEPTION",
                            "message": "default.error"
                        }
                    }
                ]
            }
        ]
    }
}
Problem

FedEx® sandbox APIs are intermittently unavailable, resulting in error messages that block testing.

Solution

Traffic Parrot FedEx® sandbox APIs solve this issue by being highly available for our customers.

Invalid account number

Source: Fedex API Rates and Transit Times API + LTL Freight: Account Number Invalid

Problem

The FedEx® Rates and Transit Times API reports the error ACCOUNT.NUMBER.INVALID.

Solution

Use FedEx® production APIs or Traffic Parrot FedEx® sandbox APIs, which do not have this issue.

Lack of clarity on which credentials to use

Source: Rating is temporarily unavailable - Error in FEDEX

Problem

The FedEx® Rate Web Service reports the error "Rating is temporarily unavailable, please try again later."

Solution

Use FedEx® production APIs or Traffic Parrot FedEx® sandbox APIs, which do not have this issue.

General failure when servers are down

Source: fedex General Failure

Problem

When the FedEx® servers are down, you get General failure as the error message.

Solution

Use FedEx® production APIs or Traffic Parrot FedEx® sandbox APIs, which do not have this issue.

FedEx® Sandbox has a limited number of test credentials

Source: fedex shipping web service, test account authentication failed

Problem

FedEx® API credentials work for some services but not others.

Solution
Check that:
  • Your sandbox account is registered for each API you wish to use.
  • You are connecting to the correct API URL.
  • Your credentials have been confirmed as valid with the FedEx® support team.

Or use Traffic Parrot FedEx® sandbox APIs, which do not have this issue.

FedEx® shipping codes are not clear

Source: FedEx Shipping Codes

Problem

Specific API code values must be used as shipping codes.

Solution

Check the FedEx API reference for the valid codes to use.

Intermittent availability of the courtesy rates service

Source: Unable to obtain courtesy rates in Fed Ex Web Service

Problem

FedEx® sandbox APIs are intermittently unavailable, resulting in error messages that block testing such as Unable to obtain courtesy rates.

Solution

Traffic Parrot FedEx® sandbox APIs solve this issue by being highly available for our customers.

Lack of clarity on available test data

Source: How to get Fedex testing tracking number?

Problem

It is not clear which tracking numbers can be used in the test environment.

Solution

See the FedEx® API Reference page for information on supported mock tracking numbers.

Or use Traffic Parrot FedEx® sandbox APIs, which contain mock tracking numbers to cover even more scenarios, such as API error cases.