Commit 0ef92871 authored by Alexey Suhov's avatar Alexey Suhov Committed by openvino-pushbot

Publishing 2019 R1.1 content and Myriad plugin sources (#162)

* Publishing 2019 R1.1 content and Myriad plugin sources
Showing with 658 additions and 89 deletions
......@@ -15,7 +15,12 @@ Deep Learning Deployment Toolkit is licensed under [Apache License Version 2.0](
## Documentation
* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* Inference Engine [build instructions](inference-engine/README.md)
* [Inference Engine build instructions](inference-engine/README.md)
* [Get Started with Deep Learning Deployment Toolkit on Linux*](get-started-linux.md)
* [Introduction to Deep Learning Deployment Toolkit](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Introduction.html)
* [Inference Engine Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html)
* [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
## How to Contribute
We welcome community contributions to the Deep Learning Deployment Toolkit repository. If you have an idea for improving the product, please share it with us by following these steps:
......
# Get Started with OpenVINO™ Deep Learning Deployment Toolkit (DLDT) on Linux*
This guide provides the information you need to start using the DLDT on Linux*. With this guide, you will learn how to:
1. [Configure the Model Optimizer](#configure-the-model-optimizer)
2. [Prepare a model for sample inference:](#prepare-a-model-for-sample-inference)
1. [Download a pre-trained model](#download-a-trained-model)
2. [Convert the model to an Intermediate Representation (IR) with the Model Optimizer](#convert-the-model-to-an-intermediate-representation-with-the-model-optimizer)
3. [Run the Image Classification Sample Application with the model](#run-the-image-classification-sample-application)
## Prerequisites
1. This guide assumes that you have already cloned the `dldt` repo and successfully built the Inference Engine and samples using the [build instructions](inference-engine/README.md). A quick way to verify the build is shown right after this list.
2. The original directory structure of the cloned repository must be kept unchanged.
> **NOTE**: Below, the directory to which the `dldt` repository is cloned is referred to as `<DLDT_DIR>`.
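A quick way to confirm that the build step succeeded is to check that the sample binaries exist (the path below is the build output layout used later in this guide):
```sh
# Quick sanity check: the Release directory should contain the compiled samples
ls <DLDT_DIR>/inference-engine/bin/intel64/Release
```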
## Configure the Model Optimizer
The Model Optimizer is a Python\*-based command line tool for importing trained models from popular deep learning frameworks such as Caffe\*, TensorFlow\*, Apache MXNet\*, ONNX\* and Kaldi\*.
You cannot perform inference on your trained model without running the model through the Model Optimizer. When you run a pre-trained model through the Model Optimizer, your output is an Intermediate Representation (IR) of the network. The Intermediate Representation is a pair of files that describe the whole model:
- `.xml`: Describes the network topology
- `.bin`: Contains the weights and biases binary data
For more information about the Model Optimizer, refer to the [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html).
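For example, converting a trained Caffe model produces a matching `.xml`/`.bin` pair. The commands below are only a sketch with placeholder paths; the exact commands for the model used in this guide are given in the conversion section further below.
```sh
# Sketch only (placeholder paths): convert a model and inspect the resulting IR pair
python3 <DLDT_DIR>/model_optimizer/mo.py --input_model <path_to_model>/model.caffemodel --output_dir <ir_dir>
ls <ir_dir>
# model.xml  model.bin
```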
### Model Optimizer Configuration Steps
You can choose to either configure all supported frameworks at once **OR** configure one framework at a time. Choose the option that best suits your needs. If you see error messages, make sure you installed all dependencies.
> **NOTE**: Since the TensorFlow framework is not officially supported on CentOS*, the Model Optimizer for TensorFlow cannot be configured or run on those systems.
> **IMPORTANT**: Internet access is required to complete the following steps successfully. If you can access the Internet only through a proxy server, make sure that the proxy is configured in your OS environment.
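For example, in a bash-like shell, the proxy can typically be set for the current session as follows (the proxy address is only a placeholder):
```sh
# Placeholder address: replace with your actual proxy server
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
```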
**Option 1: Configure all supported frameworks at the same time**
1. Go to the Model Optimizer prerequisites directory:
```sh
cd <DLDT_DIR>/model_optimizer/install_prerequisites
```
2. Run the script to configure the Model Optimizer for Caffe, TensorFlow, MXNet, Kaldi, and ONNX:
```sh
sudo ./install_prerequisites.sh
```
**Option 2: Configure each framework separately**
Configure individual frameworks separately **ONLY** if you did not select **Option 1** above.
1. Go to the Model Optimizer prerequisites directory:
```sh
cd <DLDT_DIR>/model_optimizer/install_prerequisites
```
2. Run the script for your model framework. You can run more than one script:
- For **Caffe**:
```sh
sudo ./install_prerequisites_caffe.sh
```
- For **TensorFlow**:
```sh
sudo ./install_prerequisites_tf.sh
```
- For **MXNet**:
```sh
sudo ./install_prerequisites_mxnet.sh
```
- For **ONNX**:
```sh
sudo ./install_prerequisites_onnx.sh
```
- For **Kaldi**:
```sh
sudo ./install_prerequisites_kaldi.sh
```
The Model Optimizer is now configured for one or more frameworks. You can optionally verify the setup as shown below, and then continue to the next section to download and prepare a model for running a sample inference.
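One quick, optional way to verify that the Model Optimizer launches correctly is to print its help message:
```sh
# Optional sanity check: the help text should print without import errors
python3 <DLDT_DIR>/model_optimizer/mo.py -h
```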
## Prepare a Model for Sample Inference
This section describes how to get a pre-trained model for sample inference and how to prepare the optimized Intermediate Representation (IR) that the Inference Engine uses.
### Download a Trained Model
To run the Image Classification Sample, you need a pre-trained model to run inference on. This guide uses the public SqueezeNet 1.1 Caffe* model. You can find and download this model manually, or you can use the OpenVINO™ [Model Downloader](https://github.com/opencv/open_model_zoo/tree/master/model_downloader).
With the Model Downloader, you can also download other popular public deep learning topologies and the [OpenVINO™ pre-trained models](https://github.com/opencv/open_model_zoo/tree/master/intel_models), which are prepared for a wide range of inference scenarios: object detection, object recognition, object re-identification, human pose estimation, action recognition, and others.
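For example, to see the full list of topologies the Model Downloader can fetch, you can use its `--print_all` option (check `./downloader.py -h` if the option differs in your version):
```sh
# List all topologies available to the Model Downloader (optional)
./downloader.py --print_all
```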
To download the SqueezeNet 1.1 Caffe* model to a models folder with the Model Downloader:
1. Install the [prerequisites](https://github.com/opencv/open_model_zoo/tree/master/model_downloader#prerequisites).
2. Run `downloader.py`, specifying the topology name and a `<models_dir>` path. For example, to download the model to the `~/public_models` directory:
```sh
./downloader.py --name squeezenet1.1 --output_dir ~/public_models
```
When the model files are successfully downloaded, output similar to the following is printed:
```sh
###############|| Downloading topologies ||###############
========= Downloading /home/username/public_models/classification/squeezenet/1.1/caffe/squeezenet1.1.prototxt
========= Downloading /home/username/public_models/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel
... 100%, 4834 KB, 3157 KB/s, 1 seconds passed
###############|| Post processing ||###############
========= Changing input dimensions in squeezenet1.1.prototxt =========
```
### Convert the model to an Intermediate Representation with the Model Optimizer
> **NOTE**: This section assumes that you have configured the Model Optimizer using the instructions from the [Configure the Model Optimizer](#configure-the-model-optimizer) section.
1. Create a `<ir_dir>` directory that will contain the Intermediate Representation (IR) of the model.
2. Inference Engine can perform inference on a [list of supported devices](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_Supported_Devices.html) using specific device plugins. Different plugins support models of [different precision formats](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_Supported_Devices.html#supported_model_formats), such as FP32, FP16, and INT8. To prepare an IR to run inference on particular hardware, run the Model Optimizer with the appropriate `--data_type` option:
**For CPU (FP32):**
```sh
python3 <DLDT_DIR>/model_optimizer/mo.py --input_model <models_dir>/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type FP32 --output_dir <ir_dir>
```
**For GPU and MYRIAD (FP16):**
```sh
python3 <DLDT_DIR>/model_optimizer/mo.py --input_model <models_dir>/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type FP16 --output_dir <ir_dir>
```
After the Model Optimizer script completes, the produced IR files (`squeezenet1.1.xml`, `squeezenet1.1.bin`) are in the specified `<ir_dir>` directory (a quick check of the directory contents is shown after these steps).
3. Copy the `squeezenet1.1.labels` file from the `<DLDT_DIR>/inference-engine/samples/sample_data/` directory to the model IR directory. This file contains the classes that ImageNet uses, so the inference results show class names instead of class numbers:
```sh
cp <DLDT_DIR>/inference-engine/samples/sample_data/squeezenet1.1.labels <ir_dir>
```
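At this point, `<ir_dir>` should contain the IR pair and the labels file. A quick check:
```sh
ls <ir_dir>
# squeezenet1.1.xml  squeezenet1.1.bin  squeezenet1.1.labels
```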
Now you are ready to run the Image Classification Sample Application.
## Run the Image Classification Sample Application
The Inference Engine sample applications were automatically compiled when you built the Inference Engine using the [build instructions](inference-engine/README.md). The binary files are located in the `<DLDT_DIR>/inference-engine/bin/intel64/Release` directory.
Follow the steps below to run the Image Classification sample application on the prepared IR and with an input image:
1. Go to the samples build directory:
```sh
cd <DLDT_DIR>/inference-engine/bin/intel64/Release
```
2. Run the sample executable, specifying the `car.png` file from the `<DLDT_DIR>/inference-engine/samples/sample_data/` directory as an input image, the IR of your model, and the plugin for the hardware device to perform inference on:
**For CPU:**
```sh
./classification_sample -i <DLDT_DIR>/inference-engine/samples/sample_data/car.png -m <ir_dir>/squeezenet1.1.xml -d CPU
```
**For GPU:**
```sh
./classification_sample -i <DLDT_DIR>/inference-engine/samples/sample_data/car.png -m <ir_dir>/squeezenet1.1.xml -d GPU
```
**For MYRIAD:**
>**NOTE**: Running inference on VPU devices (Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2) with the MYRIAD plugin requires performing [additional hardware configuration steps](inference-engine/README.md#optional-additional-installation-steps-for-the-intel-movidius-neural-compute-stick-and-neural-compute-stick-2).
```sh
./classification_sample -i <DLDT_DIR>/inference-engine/samples/sample_data/car.png -m <ir_dir>/squeezenet1.1.xml -d MYRIAD
```
When the sample application completes, the label and confidence for the top-10 categories are printed on the screen. Below is a sample output with inference results on CPU:
```sh
Top 10 results:
Image /home/user/dldt/inference-engine/samples/sample_data/car.png
classid probability label
------- ----------- -----
817 0.8363345 sports car, sport car
511 0.0946488 convertible
479 0.0419131 car wheel
751 0.0091071 racer, race car, racing car
436 0.0068161 beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
656 0.0037564 minivan
586 0.0025741 half track
717 0.0016069 pickup, pickup truck
864 0.0012027 tow truck, tow car, wrecker
581 0.0005882 grille, radiator grille
total inference time: 2.6642941
Average running time of one iteration: 2.6642941 ms
Throughput: 375.3339402 FPS
[ INFO ] Execution successful
```
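To report a different number of categories, the sample accepts an `-nt` option (the same option is shown for the Python classification sample later in this commit; treat the command below as an illustrative sketch). For example, to get only the top-5 results on CPU:
```sh
# Assumes the C++ sample accepts the same -nt option as the Python classification sample
./classification_sample -i <DLDT_DIR>/inference-engine/samples/sample_data/car.png -m <ir_dir>/squeezenet1.1.xml -d CPU -nt 5
```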
## Additional Resources
* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* [Inference Engine build instructions](inference-engine/README.md)
* [Introduction to Intel® Deep Learning Deployment Toolkit](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Introduction.html)
* [Inference Engine Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html)
* [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
* [Inference Engine Samples Overview](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html)
......@@ -2,7 +2,7 @@
# SPDX-License-Identifier: Apache-2.0
#
cmake_minimum_required(VERSION 3.8 FATAL_ERROR)
cmake_minimum_required(VERSION 3.5 FATAL_ERROR)
project(InferenceEngine)
......
......@@ -30,6 +30,7 @@ endif()
if (APPLE)
set(ENABLE_GNA OFF)
set(ENABLE_CLDNN OFF)
SET(ENABLE_MYRIAD OFF)
endif()
......@@ -60,6 +61,14 @@ if (NOT ENABLE_MKL_DNN)
set(ENABLE_MKL OFF)
endif()
if (NOT ENABLE_VPU)
set(ENABLE_MYRIAD OFF)
endif()
if (NOT ENABLE_MYRIAD)
set(ENABLE_VPU OFF)
endif()
# The next section sets defines to be accessible in C++/C code for certain features
if (ENABLE_PROFILING_RAW)
add_definitions(-DENABLE_PROFILING_RAW=1)
......@@ -69,6 +78,22 @@ if (ENABLE_CLDNN)
add_definitions(-DENABLE_CLDNN=1)
endif()
if (ENABLE_MYRIAD)
add_definitions(-DENABLE_MYRIAD=1)
endif()
if (ENABLE_MYX_PCIE AND ENABLE_MYRIAD)
add_definitions(-DENABLE_MYX_PCIE=1)
endif()
if (ENABLE_MYRIAD_NO_BOOT AND ENABLE_MYRIAD )
add_definitions(-DENABLE_MYRIAD_NO_BOOT=1)
endif()
if (ENABLE_MYX_PCIE AND ENABLE_MYRIAD_NO_BOOT)
message(FATAL_ERROR "ENABLE_MYX_PCIE and ENABLE_MYRIAD_NO_BOOT can't be enabled at the same time")
endif()
if (ENABLE_MKL_DNN)
add_definitions(-DENABLE_MKL_DNN=1)
endif()
......
......@@ -37,6 +37,24 @@ else()
set(MODELS_BRANCH "master")
endif()
if (ENABLE_MYRIAD)
RESOLVE_DEPENDENCY(VPU_FIRMWARE_MA2450
ARCHIVE_UNIFIED firmware_ma2450_491.zip
TARGET_PATH "${TEMP}/vpu/firmware/ma2450"
ENVIRONMENT "VPU_FIRMWARE_MA2450"
FOLDER)
debug_message(STATUS "ma2450=" ${VPU_FIRMWARE_MA2450})
endif ()
if (ENABLE_MYRIAD)
RESOLVE_DEPENDENCY(VPU_FIRMWARE_MA2480
ARCHIVE_UNIFIED firmware_ma2480_mdk_R7_9.zip
TARGET_PATH "${TEMP}/vpu/firmware/ma2480"
ENVIRONMENT "VPU_FIRMWARE_MA2480"
FOLDER)
debug_message(STATUS "ma2480=" ${VPU_FIRMWARE_MA2480})
endif ()
## enable cblas_gemm from OpenBLAS package
if (GEMM STREQUAL "OPENBLAS")
if(NOT BLAS_LIBRARIES OR NOT BLAS_INCLUDE_DIRS)
......
......@@ -62,6 +62,14 @@ list (APPEND IE_OPTIONS IE_DEBUG_POSTFIX)
set(IE_RELEASE_POSTFIX "${IE_RELEASE_POSTFIX}" CACHE STRING "Release postfix" FORCE)
list (APPEND IE_OPTIONS IE_RELEASE_POSTFIX)
ie_option (ENABLE_VPU "vpu targeted plugins for inference engine" ON)
ie_option (ENABLE_MYRIAD "myriad targeted plugin for inference engine" ON)
ie_option (ENABLE_MYX_PCIE "myriad plugin with support PCIE device" OFF)
ie_option (ENABLE_MYRIAD_NO_BOOT "myriad plugin will skip device boot" OFF)
ie_option (ENABLE_TESTS "unit and functional tests" OFF)
ie_option (ENABLE_GAPI_TESTS "unit tests for GAPI kernels" OFF)
......
......@@ -8,7 +8,7 @@ This topic demonstrates how to run the Benchmark Application demo, which perform
Upon start-up, the application reads command-line parameters and loads a network and images into the Inference Engine plugin. The number of infer requests and the execution approach depend on the mode defined with the `-api` command-line parameter.
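As an illustration, an asynchronous run typically looks like the following (a sketch with placeholder paths; flags other than `-api` may differ between releases):
```
./benchmark_app -i <path_to_image>/car.png -m <path_to_model>/model.xml -d CPU -api async
```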
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Specify Input Shapes** section of [Converting a Model Using General Conversion Parameters](./docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](./docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
### Synchronous API
For synchronous mode, the primary metric is latency. The application creates one infer request and executes the `Infer` method. The number of executions is defined by one of two values:
......
......@@ -3,13 +3,13 @@
This topic demonstrates how to run the Image Classification sample application, which performs
inference using image classification networks such as AlexNet and GoogLeNet.
### How It Works
## How It Works
Upon start-up, the sample application reads command-line parameters and loads a network and an image into the Inference Engine plugin. When inference is done, the application creates an output image and outputs data to the standard output stream.
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Specify Input Shapes** section of [Converting a Model Using General Conversion Parameters](./docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](./docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
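For reference, reconverting a model with reversed input channels looks roughly like this (a sketch with placeholder paths):
```
python3 mo.py --input_model <path_to_model>/model.caffemodel --reverse_input_channels --output_dir <output_dir>
```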
## Running
......@@ -62,18 +62,16 @@ For example, to perform inference of an AlexNet model (previously converted to t
python3 classification_sample.py -i <path_to_image>/cat.bmp -m <path_to_model>/alexnet_fp32.xml
```
### Sample Output
## Sample Output
By default, the application outputs the top-10 inference results.
Add the `-nt` option to the previous command to modify the number of top output results.
For example, to get the top-5 results on GPU, run the following command:
```
python3 classification_sample.py<path_to_image>/cat.bmp -m <path_to_model>/alexnet_fp32.xml -nt 5 -d GPU
python3 classification_sample.py -i <path_to_image>/cat.bmp -m <path_to_model>/alexnet_fp32.xml -nt 5 -d GPU
```
## See Also
* [Using Inference Engine Samples](./docs/IE_DG/Samples_Overview.md)
* [Model Optimizer tool](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
* [Model Downloader](https://github.com/opencv/open_model_zoo/tree/2018/model_downloader)
......@@ -16,7 +16,7 @@ Another required aspect of good throughput is a number of iterations. Only with
The batch mode is independent of the pipelined mode. The pipelined mode works efficiently with any batch size.
### How It Works
## How It Works
Upon start-up, the sample application reads command-line parameters and loads a network and an image into the Inference Engine plugin.
......@@ -26,13 +26,13 @@ Then in a loop it starts inference for the current infer request and switches to
When inference is done, the application outputs data to the standard output stream.
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Specify Input Shapes** section of [Converting a Model Using General Conversion Parameters](./docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](./docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
## Running
Running the application with the `-h` option yields the following usage message:
```
python3 classification_sample_async.py -h
python3 classification_sample_async.py -h
```
The command yields the following usage message:
```
......@@ -80,7 +80,7 @@ You can do inference on an image using a trained AlexNet network on FPGA with fa
python3 classification_sample_async.py -i <path_to_image>/cat.bmp -m <path_to_model>/alexnet_fp32.xml -nt 5 -d HETERO:FPGA,CPU -nireq 2 -ni 200
```
### Sample Output
## Sample Output
By default, the application outputs the top-10 inference results for each infer request.
It also provides a throughput value measured in frames per second.
......
......@@ -7,7 +7,7 @@ inference of style transfer models.
## How It Works
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Specify Input Shapes** section of [Converting a Model Using General Conversion Parameters](./docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](./docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
## Running
......
......@@ -74,7 +74,7 @@ public:
ConcatLayer& setAxis(size_t axis);
private:
size_t axis;
size_t axis = 1;
};
} // namespace Builder
......
......@@ -98,7 +98,7 @@ public:
EltwiseLayer& setScales(const std::vector<float>& scales);
private:
EltwiseType type;
EltwiseType type = SUM;
};
} // namespace Builder
......
// Copyright (C) 2019 Intel Corporation
// Copyright (C) 2018-2019 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
......
......@@ -161,8 +161,8 @@ public:
PoolingLayer& setExcludePad(bool exclude);
private:
PoolingType type;
RoundingType roundingType;
PoolingType type = MAX;
RoundingType roundingType = CEIL;
};
} // namespace Builder
......
......@@ -44,6 +44,9 @@ public:
const Version *GetVersion() {
const Version *versionInfo = nullptr;
actual->GetVersion(versionInfo);
if (versionInfo == nullptr) {
THROW_IE_EXCEPTION << "Unknown device is used";
}
return versionInfo;
}
......
......@@ -23,8 +23,9 @@ namespace details {
template<class NT, class LT>
class INetworkIterator: public std::iterator<std::input_iterator_tag, std::shared_ptr<LT>> {
public:
explicit INetworkIterator(NT * network, bool toEnd = false): network(network), currentIdx(0) {
if (!network || toEnd)
explicit INetworkIterator(NT * network, bool toEnd): network(network), currentIdx(0) {}
explicit INetworkIterator(NT * network): network(network), currentIdx(0) {
if (!network)
return;
const auto& inputs = network->getInputs();
......
......@@ -30,7 +30,7 @@ class SharedObjectLoader {
private:
HMODULE shared_object;
public:
public:
/**
* @brief Loads a library with the name specified. The library is loaded according to the
* WinAPI LoadLibrary rules
......@@ -38,6 +38,20 @@ private:
*/
explicit SharedObjectLoader(LPCTSTR pluginName) {
char cwd[1024];
// Exclude current directory from DLL search path process wise.
// If an application-specific path was configured before, then
// the current directory is already excluded.
// GetDllDirectory does not distinguish whether the application-specific
// path was set to "" or NULL, so reset it to "" to keep the
// application safe.
if (GetDllDirectory(0, NULL) <= 1) {
SetDllDirectory(
#if defined UNICODE
L"");
#else
"");
#endif
}
shared_object = LoadLibrary(pluginName);
if (!shared_object) {
THROW_IE_EXCEPTION << "Cannot load library '"
......
......@@ -82,7 +82,7 @@ public:
* @brief Constructor. Creates an empty Blob object with the specified precision.
* @param tensorDesc Defines the layout and dims of the blob
*/
explicit Blob(TensorDesc tensorDesc): tensorDesc(tensorDesc) {}
explicit Blob(const TensorDesc &tensorDesc): tensorDesc(tensorDesc) {}
/**
* @deprecated Please use TensorDesc for Blob initialization
......@@ -126,17 +126,21 @@ public:
* @return Total number of elements (a product of all the dimensions)
*/
size_t Resize(const SizeVector &dims, Layout layout = Layout::ANY) noexcept {
bool bret = deallocate();
if (layout != Layout::ANY) {
tensorDesc = TensorDesc(tensorDesc.getPrecision(), SizeVector(dims.rbegin(), dims.rend()), layout);
} else {
tensorDesc.setDims(SizeVector(dims.rbegin(), dims.rend()));
}
if (!bret) {
allocate();
try {
bool bret = deallocate();
if (layout != Layout::ANY) {
tensorDesc = TensorDesc(tensorDesc.getPrecision(), SizeVector(dims.rbegin(), dims.rend()), layout);
} else {
tensorDesc.setDims(SizeVector(dims.rbegin(), dims.rend()));
}
if (!bret) {
allocate();
}
return product(tensorDesc.getDims());
} catch (...) {
return 0;
}
return product(tensorDesc.getDims());
}
/**
......@@ -147,16 +151,20 @@ public:
* @return The total number of elements (a product of all the dims)
*/
size_t Reshape(const SizeVector &dims, Layout layout = Layout::ANY) noexcept {
if (product(tensorDesc.getDims()) != product(dims)) {
try {
if (product(tensorDesc.getDims()) != product(dims)) {
return 0;
}
if (layout != Layout::ANY) {
tensorDesc = TensorDesc(tensorDesc.getPrecision(), SizeVector(dims.rbegin(), dims.rend()), layout);
} else {
tensorDesc.setDims(SizeVector(dims.rbegin(), dims.rend()));
}
return product(tensorDesc.getDims());
} catch (...) {
return 0;
}
if (layout != Layout::ANY) {
tensorDesc = TensorDesc(tensorDesc.getPrecision(), SizeVector(dims.rbegin(), dims.rend()), layout);
} else {
tensorDesc.setDims(SizeVector(dims.rbegin(), dims.rend()));
}
return product(tensorDesc.getDims());
}
/**
......
......@@ -27,10 +27,8 @@ enum class TargetDevice : uint8_t {
eGPU = 3,
eFPGA = 4,
eMYRIAD = 5,
eHDDL = 6,
eGNA = 7,
eHETERO = 8,
eKMB = 9,
};
/**
......@@ -52,10 +50,8 @@ class TargetDeviceInfo {
DECL_DEVICE(GPU),
DECL_DEVICE(FPGA),
DECL_DEVICE(MYRIAD),
DECL_DEVICE(HDDL),
DECL_DEVICE(GNA),
DECL_DEVICE(HETERO),
DECL_DEVICE(KMB)
};
#undef DECLARE
return g_allDeviceInfos;
......@@ -68,11 +64,9 @@ class TargetDeviceInfo {
{ "GPU", InferenceEngine::TargetDevice::eGPU },
{ "FPGA", InferenceEngine::TargetDevice::eFPGA },
{ "MYRIAD", InferenceEngine::TargetDevice::eMYRIAD },
{ "HDDL", InferenceEngine::TargetDevice::eHDDL },
{ "GNA", InferenceEngine::TargetDevice::eGNA },
{ "BALANCED", InferenceEngine::TargetDevice::eBalanced },
{ "HETERO", InferenceEngine::TargetDevice::eHETERO },
{ "KMB", InferenceEngine::TargetDevice::eKMB }
};
auto val = deviceFromNameMap.find(deviceName);
return val != deviceFromNameMap.end() ? val->second : InferenceEngine::TargetDevice::eDefault;
......