1. Machine Learning and Deep Learning applications including signal and image processing to implement pattern recognition and associated metrologies
The team has built both traditional and cutting-edge systems using supervised and unsupervised learning. Applications of these technologies include object detection and metrology within images, pattern recognition and metrology in signal-processing applications, and combinatorial sensor fusion. The team has successfully deployed continuously running applications for manufacturing assembly verification, weld-torch signature monitoring, bio-signal monitoring for life sciences, and fully automated product recognition in still images and video.
2. Database design, interfaces, and coding – SQL database design (schemas, stored procedures, and web services / interfaces)
The team's primary database-design experience is using both commercial and open-source database engines to create data repositories for transactions associated with SaaS, client-server, and other analytical systems. The team has deep experience with both referentially intensive designs and simple logging and event-triggering designs. The database is then typically exposed to front-end clients, including GUI applications, as a data source through ORM, ODBC, and web-service interfaces to provide a smooth, seamless client experience. The team has experience with Oracle, PostgreSQL, SQL Server, and MySQL database systems, as well as discrete data stores supported by frameworks such as POCO.
3. SaaS – Software as a Service applications with backend servers
The team's most recent SaaS development and deployment experience centered on migrating an enterprise client-server system, including a thick Qt-based GUI client, into a proof-of-concept SaaS system that was entirely browser based. Previous experience included a system deployed at a semiconductor facility's mask supplier that allowed the semiconductor customer to monitor, audit, and perform offline simulations of the lithographic masks to be supplied. This dramatically increased the quality and reduced the time-to-delivery of the photomasks, with high confidence in their performance and accuracy. The system provided complete lithographic simulation, letting the customer see exactly what effect mask defects would have on the finished integrated-circuit product, and supported a joint sign-off process for delivery.
4. Media services – including streaming media (video, audio)
The team's streaming-media experience involved developing both back-end servers and front-end GUI clients for DRM-based audio applications. The application included automated backups, feature-based licensing, and multi-supplier support for OEMs. RTSP (Real-Time Streaming Protocol) was used heavily to provide a continuous stream of media packets while avoiding buffering and other issues associated with high-fidelity, high-quality network streaming.
5. Analytical systems – simulations, mathematical transformations, back-end compute services, GPUs
Multiple team members have strong experience with analytical data systems, including very high node-count simulations, mathematical convergence problems such as the integral operations involved in circuit analysis, and continuous filtering systems: FFTs, DFTs, signal processing, image transformations, and common filters implemented in highly optimized digital form. The team also has experience with the incremental approach of proving out filter chains and then migrating them to optimized GPU implementations using CUDA, OpenCL, and other frameworks.
6. Client-Server systems including visualization front-end clients, REST APIs, and object remoting
The team's most recent experience has centered on client-server enterprise systems that provide consistent computational support independent of the data sources or the visualization targets used for GUI interfaces. Third-party frameworks supporting these clients include the popular POCO C++ environment, which allows the GUI developers to work independently of the back-end compute team. This improves the performance and quality of the finished system by specifying and testing against interface APIs developed as REST and proprietary object-remoting interfaces.
7. Distributed computing, multi-process, multi-threading for performance optimization
The team's general methodology for designing a robust, high-quality, high-performance solution is to craft the design so it can later migrate to multi-process and multi-threaded code, while focusing initially on the most accurate "quality of results." By addressing accuracy first, the team ensures accurate, testable, and verifiable results for the performance and optimization phases that follow. Designing this plan in from the beginning gives the team a data-driven process in which quality is established first and remains testable throughout development.
Most of the team's distributed-computing experience has involved multi-system partitioning of systems that are being scaled as part of their development, with considerations such as integrating new systems with legacy ones and re-factoring outdated systems. Most long-established customers have legacy systems that continue to provide a high quality of service but can no longer be integrated, improved, or supported by existing employees or vendors. Migrating and integrating these systems onto new platforms, frameworks, and workflows is a strong suit of the team. A primary task in such work is bringing system documentation up to date before re-factoring and integration.
The team can provide the following services in part or full for the topical areas listed above:
• Software Development (delivery of production-quality, fully functional systems)
• Technical design (software design specifications, APIs, public interfaces, all controlled interfaces and algorithms)
• Project management (customer interfaces, schedules, status meetings and reports, estimates, billing)
• Technical documentation (proposals, architectures, specifications, end-user documentation, QA/test protocols and results)
• Infrastructure (source code/revision control for all controlled documents, issue tracking and resolution reports, continuous integration for automated builds of code into packaging)
• Proofs of concept (storyboards, quick prototypes to show critical elements, risk reduction on key areas of the project)