- Publisher
- Shanghai Gongzhida Electric Technology Co., Ltd.
- Price
- ¥1325.00/unit
- Siemens
- S200 200 V low-inertia servo motor
- S200
- FL2102-2AG00-1MC0
- China
- With keyway, without holding brake
- Phone
- 15221760199
- Mobile
- 15221760199
- WeChat
- 15221760199
- Published
- 2024-10-26 15:54:52
AI Inference Server standardizes AI model execution on Siemens Industrial Edge. It facilitates data collection/acquisition, orchestrates data traffic, and is compatible with the most popular AI frameworks.
More information is available at this link.
Ordering option: The app can be ordered from the Industrial Edge Marketplace at this link.
AI Inference Server is a Siemens Industrial Edge application that can run on Siemens Industrial Edge devices.
AI Inference Server enables AI models to be executed using the built-in Python interpreter for inference purposes.
The application guides the user to set up execution of the AI model on the Siemens Industrial Edge platform using the ready-to-use data connectors.
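As a rough illustration of how a Python-based inference step on such a platform might look, here is a minimal sketch. The `process_input` entry-point name, the JSON payload shape, and the threshold "model" are illustrative assumptions, not the actual AI Inference Server API: a data connector would deliver the payload and pick up the returned result.

```python
# Hypothetical sketch of an inference pipeline step; the entry-point name
# and payload format are illustrative assumptions, not the real
# AI Inference Server interface.
import json

# Stand-in for a trained model: a simple threshold classifier.
THRESHOLD = 0.75

def process_input(payload: str) -> str:
    """Receive a JSON payload from an (assumed) data connector,
    run 'inference', and return a JSON result."""
    data = json.loads(payload)
    score = float(data["sensor_value"])
    result = {
        "sensor_value": score,
        "anomaly": score > THRESHOLD,
    }
    return json.dumps(result)

if __name__ == "__main__":
    # One sample reading above the threshold is flagged as an anomaly.
    print(process_input('{"sensor_value": 0.9}'))
```

In a real deployment, the threshold classifier would be replaced by a model loaded with whatever Python-compatible framework the pipeline uses, while the connector wiring stays on the platform side.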
AI Inference Server standardizes logging, monitoring, and debugging of AI models.
AI Inference Server is designed to integrate MLOps with the AI Model Monitor.
AI Inference Server with GPU acceleration:
AI Inference Server in the variant with GPU acceleration standardizes the execution of the AI model on GPU-accelerated hardware using AI-enabled inference in the Edge ecosystem.
AI Inference Server
Supports the most popular AI frameworks that are compatible with Python
Orchestrates and controls AI model execution
Can run AI pipelines with both an older and a newer version of Python
Enables horizontal scaling of the AI pipelines for optimum performance
Simplifies tasks such as input mapping (thanks to integration with Databus and other Siemens Industrial Edge connectors), data collection/acquisition, and pipeline visualization
Permits monitoring and debugging of AI models based on inference statistics
Features logging and image visualization
Includes pipeline version management
Permits the import of models via the user interface or via a remote connection
Supports persistent data storage on the local device for each pipeline
AI Inference Server variant for 3 pipelines
Supports the simultaneous execution of up to 3 AI pipelines
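The monitoring-via-inference-statistics idea above can be approximated in plain Python. The class and metric names below are illustrative assumptions, not the server's actual monitoring interface; the sketch just shows the kind of per-pipeline latency statistics such a feature would collect.

```python
# Hypothetical sketch of per-pipeline inference statistics; names and
# metrics are illustrative assumptions, not the AI Inference Server API.
import time
from dataclasses import dataclass

@dataclass
class InferenceStats:
    """Collects simple inference statistics: call count,
    mean latency, and worst-case latency."""
    count: int = 0
    total_s: float = 0.0
    max_s: float = 0.0

    def record(self, elapsed_s: float) -> None:
        self.count += 1
        self.total_s += elapsed_s
        self.max_s = max(self.max_s, elapsed_s)

    @property
    def mean_s(self) -> float:
        return self.total_s / self.count if self.count else 0.0

def timed_inference(stats: InferenceStats, model_fn, x):
    """Run one inference call and record its latency."""
    start = time.perf_counter()
    result = model_fn(x)
    stats.record(time.perf_counter() - start)
    return result

if __name__ == "__main__":
    stats = InferenceStats()
    for value in (0.2, 0.8, 0.5):
        timed_inference(stats, lambda x: x > 0.5, value)
    print(f"runs={stats.count} mean={stats.mean_s:.6f}s max={stats.max_s:.6f}s")
```

A dashboard or debugger built on numbers like these could then flag pipelines whose latency drifts, which is the kind of monitoring the feature list describes.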