Developer Guide for Seervision's Audio Integration
How to start developing for our Audio Tracking Node-RED flows
This guide aims to get you started in connecting to and developing for Seervision’s Audio Tracking. Typically, a customer might want to switch between camera feeds once our automation frames an active speaker, and we provide you with these “framing triggers” via our Node-RED flows, so that you can write the feed switching logic.
We’ll divide this guide into two parts. The first part is for those who are entirely new to Node-RED. We’ll provide you with some pointers and resources to get acquainted with the general Node-RED ecosystem and write some very simple flows, just to get a feeling for it. If you’ve already worked with Node-RED in the past and know your way around, you can safely skip this.
The second part will focus very concretely on your integration with Seervision, the general architecture, what you can expect, and some general guidelines from our side on what your integration will most probably look like.
Getting started with Node-RED
To get started with Node-RED, our first recommendation would be that you install it on your PC so you can familiarise yourself with it and have an experimentation environment. The final work and integration with us will happen on the Node-RED instance on our Seervision servers, but if you’re new to Node-RED, it will be good to have your own experimental install where you can play around and test things without fear of breaking our integration.
The official Node-RED website lists all supported ways to install it (Windows, macOS, Linux, Docker), so we will let their documentation guide you through the installation on your personal machine. Here is the link to their local install instructions: Running Node-RED Locally.
Hello World
Now that you’ve installed Node-RED on your machine, you’ll likely want to play around with it. Two resources we like a lot (and which are somewhat similar) for your first Hello World project are:
- Creating your first flow – from the official Node-RED documentation
- Getting to Hello World – from Cisco, a somewhat more thorough walkthrough that covers the relevant basics
Connecting to Seervision's Node-RED Instance
At this point, you should have some idea of what Node-RED is and what it does, so it’s time to take a look at our Audio Tracking flow and start writing automation for it.
What is expected of my integration with Node-RED?
In general, the idea is that you write “bridge” logic in Node-RED that exports our microphone automation triggers to the platform of your choice. For example, let’s imagine that the room being automated runs on a Q-SYS Core. The Q-SYS Core will be responsible for switching which camera feed is live, turning the system on/off, and a couple of other things.
Your Node-RED “bridge” logic will be responsible for consuming our triggers and sending them to the Q-SYS Core in the way that suits you best. Node-RED provides multiple ways to do this (UDP/TCP sockets, HTTP calls, and more); it’s up to you to decide what works best for the situation at hand.
You could even do all of this in Node-RED itself if you wanted to; our experience simply shows that the camera feed switching often happens in another software ecosystem.
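To make this concrete, here is a minimal sketch of what such a bridge could look like inside a Node-RED `function` node, assuming the Core is controlled via Q-SYS’s QRC protocol (JSON-RPC over TCP, port 1710) and a downstream `tcp out` node pointed at the Core. The Named Control used here is hypothetical – your Q-SYS design will define its own:

```javascript
// A minimal bridge sketch: convert a Seervision trigger into a Q-SYS QRC
// "Control.Set" command for a downstream tcp-out node (Core IP, port 1710).
// "camera_2_live" is a hypothetical Named Control from your Q-SYS design.
const command = {
    jsonrpc: "2.0",
    method: "Control.Set",
    params: {
        Name: "camera_2_live", // hypothetical Named Control
        Value: 1
    }
};

// QRC commands are NUL-terminated JSON strings.
msg.payload = JSON.stringify(command) + "\0";
return msg;
```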
Why do I have to write anything in Node-RED at all? Can't you just send me triggers over a TCP/UDP/WebSocket call?
Good question! We did this initially, but our experience has shown that such an approach requires extensive, detailed synchronisation between everyone involved, which slows down progress considerably. When you control both sides of the communication, you can iterate and deploy much faster, without having to reach out to us each time you wish to add, change, or remove functionality.
Where can I find Seervision's Node-RED flows?
Each of our Seervision Servers comes with Node-RED pre-installed. To access the Node-RED instance containing our automation flows:
- Make sure our system is launched. If you’re not sure how to do that, read our Getting Started guide first.
- You can now access the Node-RED instance on port 1880 using your web browser. For example, if the LAN IP of the Seervision Server is 192.168.1.5, you can access the Node-RED instance by navigating to http://192.168.1.5:1880/
Where can I find the triggers that I need to process?
These flows first need to be configured to connect them to the relevant microphone panels and Seervision Suite instances. You can follow our Audio Tracking guide to connect and configure everything correctly.
If everything went well, you should now be looking at our Node-RED flow. Specifically, you should see one or more large nodes called `Compute movement for [...]`. These are the nodes that you will want to connect to.
Each `Compute` node has 3 exit nodes:
- The first one, `Movement Started`, means that Seervision has started to move this particular camera. You will want to use this trigger to switch away from this camera's feed (if it is live), so that the movement is not seen live on the feed.
- The second one, `Movement Finished`, means that Seervision has stopped moving the camera, and it is safe to switch its feed to the live feed again.
- The third one, `Active Speaker Framed`, is the node you probably want to be listening to 99% of the time (the other two only have niche use-cases). This trigger means that this camera is currently framing somebody who is talking, regardless of whether we moved the camera or not (which is what the above nodes are for). So any time we fire a trigger on this exit node, it probably means you will want to switch to its feed.
To get started with these triggers, a typical approach would be to:
- Drag a `function` node from the left-hand side of the Node-RED interface onto the canvas
- Connect the 3rd exit node of the `Compute` node to the input of your new `function` node
- Double-click the new `function` node to start writing your JavaScript to parse incoming triggers (part of the `msg` object), manipulate them, and eventually send them out of your `function` node using `return msg` – see the sketch below.
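As a starting point, here is a sketch of what such a `function` node could contain. Note that the payload field names below are assumptions for illustration – inspect a real trigger with a `debug` node first to see the actual shape of the `msg` object:

```javascript
// Sketch of a trigger-parsing function node. Field names like "cameraId"
// and "action" are illustrative assumptions, not a documented schema.
const trigger = msg.payload;

// Example: only forward triggers for the camera this branch is responsible for.
if (trigger && trigger.cameraId === "camera-1") {
    // Reshape the message for whatever downstream node you use
    // (udp out, tcp out, http request, ...).
    msg.payload = { action: "switch_feed", camera: trigger.cameraId };
    return msg;
}

// Returning null drops the message, so nothing is sent downstream.
return null;
```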
Can I expect Seervision to support me in this?
Absolutely! While we can’t write the code for you, we would be more than happy to sit together with you, guide you through the process, provide tips and tricks from our experience, and much more. Definitely don’t hesitate to get in touch, helping you is literally why we are here!
Shure MXA920 – Best Practices Guide
A detailed overview of what to consider when designing a room with Seervision and Shure's MXA920 ceiling array microphones.
Check out our guide covering: Room Design Considerations, Components Needed, Diagrams, and FAQs.
Seervision General Tech Specs
A brief sheet showing some of the tech specs of the Seervision Suite
We often get asked about the technical details of our system as well as what goes into a Seervision server.
Here’s a system overview sheet that aims to answer some of the most common questions:
Seervision Robotic API
This is the manual that details the Seervision Robotic API
Seervision has two APIs surrounding its workflow. This page is dedicated to the Robotic API, meant for directly controlling the Pauli Robotic Head (so there is no Seervision Server involved).
If you are looking to control the Seervision Suite over API, you will want to look at the Seervision Production API page.
If you’ve been using our Pauli head and want to develop your own panel to control it directly, you’ll want to use our Robotic API. You can find the link to its latest documentation here:
Seervision Production API
Control the Seervision Suite via your own custom integration
Seervision has two APIs surrounding its workflow. This page is dedicated to the Production API, meant for controlling everything that happens in the Seervision Suite (toggling tracking, lens control, creating containers etc).
If you are looking to control the Seervision Pauli Robotic Head directly over API (without a Seervision Server involved), you will want to look at the Seervision Robotic API page.
We often get the question of whether it is possible to control our system via some sort of API, often to integrate with a panel, sometimes for custom panels built in-house by our users. The answer is of course yes, that’s entirely possible, using the Seervision Production API.
Our API is essentially a WebSocket endpoint that consumes JSON. You can find all the documentation for interacting with the Seervision Production API over at api.seervision.com, including examples in JavaScript and Python.
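To give you a flavour of what this looks like, here is a minimal Node.js sketch using the `ws` package. The endpoint address and message shape below are illustrative assumptions – the authoritative schema, host, and examples are in the documentation at api.seervision.com:

```javascript
// Minimal WebSocket client sketch (npm install ws).
// Host, port, path, and message format are placeholders; see
// api.seervision.com for the real endpoint and JSON schema.
const WebSocket = require("ws");

const ws = new WebSocket("ws://192.168.1.5:8080/api"); // hypothetical address

ws.on("open", () => {
    // Hypothetical command shape, purely for illustration.
    ws.send(JSON.stringify({ command: "start_tracking" }));
});

ws.on("message", (data) => {
    console.log("Reply from the Seervision Suite:", data.toString());
});
```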
Of course, we’re looking to expand that API and add in features that our users need – if something’s missing for you, don’t hesitate to get in touch and we’ll see if we can squeeze it in!
Seervision Manual
Where to find the manual for the Seervision Suite
The latest version of the SV Suite manual can always be found at manual.seervision.com
This webpage automatically gets updated whenever we make changes to our software and corresponding documentation.
We’ve got multiple checks in place to make sure that everything’s up and running smoothly, but should it not live up to your expectations, don’t hesitate to shoot us a message!
Camera automation based on microphone location data
A high-level overview of the components required for camera automation based on microphones that provide speaker location data
The goal of this article is to provide a high-level overview of the possibilities for automation based on microphones that provide positional data for the active speaker.
As each automation setup is unique in its own way, we don’t include example code here – writing this code is entirely dependent on your intended goals.
The examples below are not limited to Shure’s microphone panels; in reality, any microphone system with an API that provides some kind of speaker location can be used.
Shure is used in these examples because it is the hardware we have worked with most commonly, and it seems to be the most accurate in the location data it provides.
Basic Setup
Before kicking off, you will have to decide on a central service where you will write all your logic that handles connecting to APIs, parsing information, and sending control commands to all relevant devices (the “brain” of the automation). At Seervision, we mostly use Node-RED for this, and all of our Seervision servers by default offer a Node-RED instance. If you have a Seervision server, you can access this Node-RED interface on the IP of the Seervision server, port 1880 (as an example, on the LAN at our office, it would be http://10.20.4.23:1880).
Next, you should configure your microphone array correctly. In the case of the Shure MXA920, this will include configuring it via its web interface (e.g. microphone height, speaker height), but this varies between hardware. It’s best to contact your microphone manufacturer’s representative to make sure you get the configuration right.
Once your hardware setup is complete, your first step should be to write the logic to access the microphone and start receiving its data. For Shure’s MXA920, the documentation is available here.
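As an illustration of the receiving side, here is a sketch of a Node-RED `function` node that extracts the body of a reply from Shure’s ASCII command-string interface (Shure hardware conventionally listens on TCP port 2202; wire a `tcp in` node connected to the microphone into this one). The `< REP ... >` format is the generic Shure reply convention – verify the exact commands and talker-position reports against Shure’s MXA920 command-string documentation:

```javascript
// Sketch: parse a Shure command-string reply arriving from a tcp-in node.
// Exact command names and report contents depend on the GET commands you
// issue; check Shure's MXA920 documentation.
const reply = msg.payload.toString();

// Shure replies look like "< REP ... >"; extract the body between the brackets.
const match = reply.match(/<\s*REP\s+(.+?)\s*>/);
if (!match) {
    return null; // not a reply we recognise; drop it
}

msg.payload = match[1]; // e.g. a talker-position report
return msg;
```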
Scenarios
The last step in the basic setup is deciding for yourself what you want your automation to look like, i.e. coming up with a couple of automation scenarios. Try to write a couple of bullet points in the form of If This Then That. For example: If Lobe 1 on my microphone activates, then switch to Input 1 in my vision mixer. Having this clearly in your head will simplify converting this to code later on.
Automated Camera Switching
If you wish to automatically switch the active camera based on microphone input, you will need to find a way to interact with your vision mixer, which usually offers an API as well. At Seervision, we use vMix (their API documentation is available here), but most vision mixers have some kind of API (we’ve also done it with Blackmagic ATEM Minis, for example).
Once you have set up the communication from your automation “brain” to the vision mixer, it is a simple matter of writing your logic by leveraging the data from the APIs. For example: if lobe 1 on the MXA920 activates (Shure API), switch to Input 1 on vMix (vMix API).
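Here is a sketch of that logic as a Node-RED `function` node feeding an `http request` node, using vMix’s Web API (default port 8088). The IP address and lobe-to-input mapping are placeholders, and you should verify the function name (e.g. CutDirect) against your vMix version’s API documentation:

```javascript
// Sketch: map an active microphone lobe to a vMix input and prepare the
// HTTP call for a downstream http-request node.
const lobeToInput = { 1: 1, 2: 3, 3: 2 }; // your lobe -> vMix input mapping

const lobe = msg.payload.lobe; // assumed field name from your microphone parsing
const input = lobeToInput[lobe];
if (input === undefined) {
    return null; // no mapping for this lobe; do nothing
}

msg.url = `http://192.168.1.10:8088/api/?Function=CutDirect&Input=${input}`; // placeholder IP
return msg;
```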
Microphone Speaker Tracking with Seervision
This is the most advanced use-case, and Seervision will have to work together with you to get it set up. Before we can do so, you must already have:
- An active Node-RED instance that is connected to your microphone and is receiving data on the current active speaker
- Written logic that tells us which pan/tilt/zoom values the PTZ should be driven to
Once both of these are set up, we will provide you with the relevant interface that consumes your pan/tilt/zoom inputs and sends them to the Seervision Suite to be executed as a movement to track the speaker.
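For the second point, the sketch below shows the kind of logic we mean: converting a talker position (in metres, relative to the camera) into pan/tilt angles with basic trigonometry. The coordinate conventions and field names are assumptions – align them with your room measurements and with us before relying on them:

```javascript
// Sketch: derive pan/tilt angles from a talker position relative to the
// camera. Coordinate conventions (x = right, y = forward, z = up) and
// field names are assumptions for illustration.
const { x, y, z } = msg.payload; // talker position in metres

const panDeg = Math.atan2(x, y) * 180 / Math.PI;                 // left/right
const tiltDeg = Math.atan2(z, Math.hypot(x, y)) * 180 / Math.PI; // up/down

msg.payload = { pan: panDeg, tilt: tiltDeg, zoom: 0.5 }; // zoom is a placeholder
return msg;
```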
Seervision Integrations
This page lists all of the PTU integrations for the Seervision Suite
If you are using Seervision after July 2023 as a part of Q-SYS, please note that the list of compatible cameras is the NC-12x80 and NC-20x60, with the NC-110 supported as a conductor camera.
For more information, visit support.qsys.com, or contact your closest Q-SYS rep for the latest information.
This page lists all pan/tilt units currently compatible with the Seervision Suite. We also have a dedicated section for PTZ cameras that are confirmed to be compatible with audio tracking.
PTZ Cameras Suitable for Audio Tracking
The requirements for audio tracking are more stringent – only certain models of PTZ cameras are performant enough to deliver the required speed and accuracy for a good remote experience. Below, we’ve listed all PTZ cameras with which we’ve tested audio tracking and which we know deliver satisfactory performance. If the PTZ camera you have in mind is not on this list, it may mean we either have not tested it or that it isn’t performant enough. Feel free to check in with us to confirm.
PTZ Cameras With Confirmed Performance:
- Canon CR-N300
- Canon CR-N500
- Panasonic AW-UE160
- Panasonic AW-UE150
- Panasonic AW-UE100
- Q-SYS NC-12×80 (NC-series only, PTZ-series are currently not supported)
- Q-SYS NC-20×60 (NC-series only, PTZ-series are currently not supported)
PTZ Cameras Likely to Work (unconfirmed, verify with Seervision):
- Panasonic AW-UE80
- Panasonic AW-HE70
- Panasonic AW-UE50
- Panasonic AW-HE40
- Sony BRC-X1000
- Sony BRC-H800
- Sony BRC-X400
PTZ Cameras (Standard Visual Tracking)
These neat little devices are fantastic for small-scale productions like a studio, where you still need a beautiful, punchy and high-definition image, but don’t need the lens/camera versatility of a full-blown production company.
Panasonic PTZs
- Panasonic AW-UE150 (recommended)
- Panasonic AW-UE100 (recommended)
- AW-UE80 (FreeD)
- AW-UE70
- AW-UE50
- AW-HE130
- AW-HE65
- AW-HE58
- AW-HE40
- AW-HE48
- AW-HE35
- AW-HE60S
Note: Newer Panasonic models perform better, especially those with FreeD. If a model you’re looking for isn’t listed, it doesn’t necessarily mean we don’t support it; rather, it means we haven’t tested it or developed a driver for it yet.
VISCA Over IP PTZs
VISCA has seen widespread adoption across the industry in many devices, and as of 2022, we rolled out support for VISCA over IP to respond to that popularity. Due to the wide support of VISCA, we won’t be listing all compatible devices on this page. Generally, if a device supports VISCA over IP, we will be able to steer it (with varying levels of smoothness and reactiveness).
Note: Due to implementation particularities, it is likely that we will need to tune the VISCA control specifically for each type of device to get the best possible performance out of it. In other words, if you are planning to control a VISCA over IP device that we have not seen before, we will need some time to test that device and make sure we can tune it for best performance.
Control over VISCA Over IP
Flagship:
- Canon CR-N500 (FreeD)
- Canon CR-N300 (FreeD)
- Sony BRC-X1000 (FreeD)
- Sony BRC-H800 (FreeD)
- Sony BRC-X400 (FreeD)
- Sony SRG-X400
Others:
- Sony SRG-X120
- Q-SYS NC-20×60 (NC-series only, PTZ-series are currently not supported)
- Q-SYS NC-12×80 (NC-series only, PTZ-series are currently not supported)
Note: Always ask ahead of time about which level of performance you can expect from your selected PTZ.
Robotic Heads
Of the available robotic heads, we currently only explicitly support the Pauli Robotic Head. If you have a particular robotic head in mind, don’t hesitate to reach out to us and we can work with you through the details of the implementation.
Lenses
We support a variety of lenses, some of which need external lens motors to be actuated. If you’d like to get more details, just check in with us and we’re happy to walk you through it!
What about LANC or other communication protocols?
As it stands, we don’t offer support for any additional protocols. We’ve tested most of these at some point, and we found that the respective implementations did not allow for the kind of high-frequency communication that we need in order to guarantee smooth tracking performance.
Of course, as these protocols mature, there is no doubt that they will eventually achieve the performance that we require. If you think we’re missing out on a particular protocol, don’t hesitate to get in touch to let us know!