We Get Around Network Forum
Tags: API | Developer Program | SDK | Video

Matterport API/SDK Webinar: Matterport SDK Sensors and Sources

DanSmigrod | WGAN Forum Founder & WGAN-TV Podcast Host | Atlanta, Georgia
Video: Matterport API & SDK 1: SDK Sensors | Video courtesy of Matterport YouTube Channel | 4 May 2021

From the Matterport YouTube Channel:

In this first episode of [Matterport] API & SDK [Webinar], we talk about the new SDK Sensors breaking down how they work and why you may want to use them with your SDK applications.

Contact developers@matterport.com with any questions about the sensors or other SDK and API-related issues.

Matterport Technical documentation can be found here:

https://matterport.github.io/showcase-sdk/index.html
https://matterport.github.io/showcase-sdk/docs/sdkbundle/reference/current/modules/sensor.html


Source: Matterport YouTube Channel

---

Participating in the Matterport Webinar are:

Amir Frank, Matterport Marketing Content Manager
Raghu Munaswamy, Matterport Product Manager
Guillermo Bruce, Matterport Software Engineer
Dustin Cook, Matterport Staff Software Engineer, SDK
Briana Garcia, Matterport Sales Development Representative
Brian Vosters, Matterport Senior Manufacturing Engineer Manager
Post 1
DanSmigrod | WGAN Forum Founder & WGAN-TV Podcast Host | Atlanta, Georgia
Hi All,

Transcript (video above)

Amir Frank (00:02):
Hey, webinar listeners. Thanks for joining us. This is the first, hopefully, of many in a series of API and SDK webinars that we'll be doing with pretty much the entire API and SDK team. So we have a couple of people online. We'll just give a couple more minutes, maybe, for some others to join. In the meantime, if you're not familiar with the team, this is us. We've got Raghu, who is the product manager, Guillermo, Dustin, Briana, and Brian.

Amir Frank (00:45):
We're all here to help answer some questions and give you a little presentation about the sensors. So we're excited and looking forward to seeing what you guys think about this and what you have in store as use cases for it as well.

Amir Frank (01:00):
Let's see, we're about a minute in and we do have some more, okay. We've got a couple more people coming in. So what I wanted to do before we dive in, I just wanted to kind of set the groundwork for our presentation today. We are going to be focused on the SDK sensors. So that's what our presentation is going to be about today. We're not going into general SDK and API usage; we'll probably get to that later on in future episodes. But for now we are very much focused on the SDK sensors.

Amir Frank (01:40):
So the presentation will be maybe 5 to 10 minutes, something like that, and then we'll just open it up for Q&A. We want to give as much time as possible for your questions, and we've got everybody here who can help answer your questions. That being said, we would like to keep the questions focused on technical API-, SDK- and sensor-related questions. Anything else, we can talk offline, just because we are limited with the amount of time that we have today. So with that said, it looks like we've got a good number of people in now, and I just want to hand it off to Raghu.

Raghu Munaswamy (02:21):
Thank you, Amir. Welcome, all. This is the first of the series. As Amir mentioned, this is our forum for you today, just with exclusive partners, on developer tools, focused on the SDK sensors as the topic today. Again, the intent is to have this on a recurring basis. And if you have more topics that you want to hear from us, we'll come back to this: we'll provide you an email where you can write to us with topics. But expect more to come in the series itself.

Raghu Munaswamy (02:56):
Going into the conversation for today and the motivation for SDK sensors: you might recollect, in some of the Shop Talk series that Amir had done last year, one of the questions that has come up to us on a recurring basis is geo-fencing. "As my customers walk through a model, how do I know where they are? What controls can I have?" And in the past, the solution has been using a combination of the API and SDK: I would find out in which sweep my customer is and try to relay that back to the SDK, and build a solution tying these two together.

Raghu Munaswamy (03:34):
But now what we have is a new capability, and Guillermo, our dev lead, is going to walk us through that. That's exactly the problem that we are going to tackle with this solution: how do we know the position of the customer or visitor in my model, within the SDK capability world? So with that background, I'll hand it over to Guillermo. Again, if you have any questions, keep them coming in; we will address them towards the end of the presentation. It could be technical questions or any product questions. We are here to answer. With that, I'll hand it over to Guillermo.

Guillermo Bruce (04:12):
Hi. I'm Guillermo, technical lead for the SDK, and I am going to be sharing my screen. Okay. So this is a short description of a new feature that launched with the SDK several weeks back. We call them sensors. They're actually sensors and sources, but it's easier to talk about sensors. The topics that we're going to talk about are: what are they? What are sensors? And why would we use a sensor? We'll look at some examples and then go right into feedback and questions.

Guillermo Bruce (04:50):
So, first thing is, what is an SDK sensor and a source? A sensor is a volume that can detect sources contained within a space. As you can see here, we have a camera and we have what's called a frustum. It is a volume that represents what we see in a space. A source is something that is detectable by a sensor. Sources are volumes in the space detected by sensors.

Guillermo Bruce (05:16):
A way to think about this, and it kind of takes some getting used to this terminology, is transmitters and receivers. A sensor is a receiver; think of it like a radio frequency receiver or something like that. And a source is a transmitter, and it's always sending out a signal. So if you get close to a source, you'll get some data about it. As you get further away, you lose the data.

Guillermo Bruce (05:42):
Talking a little bit about the one type of sensor that we support today: it's called the camera frustum sensor. It's a pyramidal-shaped volume used to render the scene, as I stated before. It's the only type that we support, and it captures everything inside of the view. One useful thing about it is that whenever the user navigates through a space, the camera frustum moves with the user. Therefore, if the sensor is moving with the camera, we can get all this information about what the user is perceiving of the space.

Guillermo Bruce (06:20):
Sources. Sources are very primitive shapes that we can apply to a space: oriented boxes, vertical cylinders, and spheres. These shapes can be used to approximate most 3D things that you might insert into a space. Those things can be anything from Mattertags to models like glTFs or OBJs. It could be rooms, it could be floors, it could be a small part of a room. We've tried to provide some of the basic primitives that you can mold to any kind of object that you want to represent in the space.
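
To make those shapes concrete, here is a minimal TypeScript sketch of creating the camera frustum sensor and one source of each primitive, based on the sensor reference linked at the top of this post. It assumes an already-connected SDK object (here `sdk`); the option names (center/size, basePoint/height/radius, origin/radius, userData) and the showDebug call follow that documentation but should be verified against your SDK version, and all coordinates are made-up placeholders.

// Minimal sketch: create the camera frustum sensor and one source per primitive shape.
// Assumes `sdk` is an already-connected Showcase SDK object (SDK for Embeds or SDK Bundle).
declare const sdk: any;

async function setUpSensor() {
  // The camera frustum sensor is the only sensor type available today.
  const sensor = await sdk.Sensor.createSensor(sdk.Sensor.SensorType.CAMERA);
  sensor.showDebug(true); // draws the debug volumes shown in the demo

  // Placeholder geometry; replace with coordinates measured in your own model.
  const sources = await Promise.all([
    sdk.Sensor.createSource(sdk.Sensor.SourceType.BOX, {
      center: { x: 0, y: 1, z: 0 },
      size: { x: 3, y: 2.5, z: 4 },
      userData: { name: 'living-room' },
    }),
    sdk.Sensor.createSource(sdk.Sensor.SourceType.CYLINDER, {
      basePoint: { x: 5, y: 0, z: 2 },
      radius: 1.5,
      height: 2.5,
      userData: { name: 'sweep-zone' },
    }),
    sdk.Sensor.createSource(sdk.Sensor.SourceType.SPHERE, {
      origin: { x: -2, y: 1, z: 3 },
      radius: 2,
      userData: { name: 'tag-cluster' },
    }),
  ]);

  sensor.addSource(...sources);
  return sensor;
}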

Guillermo Bruce (06:58):
Let's see. Cylinders are a particularly interesting case, because you can approximate sweeps very well, since sweeps tend to be radial. If we were to have avatars, you could also represent people or avatars inside of a space using cylinders.

Guillermo Bruce (07:17):
Why use SDK sensors? What are the benefits? So, first item: sensors implement the common 3D math required to compute intersections between volumes in the space in an efficient way. If some folks have had experience with this kind of work, it can become very complex and very, I would say, compute-intensive. So it takes a lot of time to develop applications that use this kind of functionality. We are providing this functionality, meaning it will save you time. Another aspect of this is that the sensor system is reactive and provides all the intersection results to your application via the concept of objects and observables.

Guillermo Bruce (08:02):
If you've used the SDK previously, we have been moving away from standard polling mechanisms of calling functions and getting data back, and toward a concept called observables, where we get notified of changes. It helps us be very efficient about the things that we compute, when they change. So we're actually extending that system now to be not just observables across the iframe, or directly from any of the SDK namespace functions, but we're adding objects that you can interact with in a synchronous way. This means fewer promises and more synchronicity; your code looks cleaner and it's easier to maintain.
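
As a sketch of that observable pattern, the sensor exposes a readings collection you subscribe to rather than poll. The observer method names (onAdded, onUpdated) and the reading fields (inView, inRange) are taken from the linked sensor documentation; double-check them against your SDK version.

// Sketch: react to sensor readings instead of polling.
// `sensor` is the object returned by the setup sketch above, with named userData on each source.
declare const sensor: any;

sensor.readings.subscribe({
  onAdded(source: any, reading: any) {
    // Fired when a source gets its first reading.
    console.log('source registered:', source.userData, reading);
  },
  onUpdated(source: any, reading: any) {
    // Each reading reports whether the source is inside the camera frustum
    // (inView) and within the sensor's detection range (inRange).
    if (reading.inView && reading.inRange) {
      console.log(`${source.userData.name} is visible to the user`);
    }
  },
});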

Guillermo Bruce (08:45):
Another aspect: well, why do you want to use sensors? Sensors can enable spatially aware behavior, making your applications more dynamic. Today a lot of the content that you deal with, things like Mattertags, sweeps, labels, floors, and rooms, is very static and predefined for you. But with these sensors and sources, you can augment your space with your own things that you can place and react to, to make it more dynamic. And lastly, the sensor system can be used to approximate any 3D object, as I stated before, even ones unique to your applications. So with these primitive shapes, we should be able to receive signals from any kind of thing that you can insert into a space.

Guillermo Bruce (09:30):
Some examples. The first example is... Actually, I [inaudible 00:09:34] my notes. So just to note, the small red box is a debugging tool that we've enabled so that when you're troubleshooting or developing using sensors and sources, you can see where the source is. In this case, what we've done is, on the right side, you'll see these boxes, and those represent areas where we've added box sources in the space. And on the left side, you'll see an application that listens to the user entering that space and then displays the name of that type of space. So it's contextual information about that space. So this is a way to designate areas in the space. And note that if we were to look at the room or floor data, this is likely, I haven't actually checked, but I believe, one big room. So we've actually subdivided the room into functional areas. That's more like how you might really use this space.
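
A rough sketch of how that area-labeling demo could be wired up, as I understand it from the description: each box source carries a human-readable name in its userData, and the page updates a caption element when the camera enters that box. The element id, the userData shape, and the use of inRange as the "user is inside this box" signal are my own assumptions, not documented behavior of the demo.

// Sketch of the "functional areas" demo: show the name of the area the user is in.
// Assumes a sensor with box sources whose userData carries { name: string },
// created as in the earlier setup sketch.
declare const sensor: any;

const caption = document.getElementById('area-caption') as HTMLElement; // placeholder element

sensor.readings.subscribe({
  onUpdated(source: any, reading: any) {
    if (reading.inRange) {
      // The camera entered this box: show its label.
      caption.textContent = source.userData.name;
    } else if (caption.textContent === source.userData.name) {
      // The camera left the box it was last labeled with.
      caption.textContent = '';
    }
  },
});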

Guillermo Bruce (10:39):
The next one is an interesting one. So this is a way to manage complexity in a space. As you've seen, there are a lot of spaces that have a very high density of Mattertags, which can kind of occlude some aspects of the model, right? You see less of the model and more of the Mattertags. So this is a concept that you can use to manage complexity in the scene. You can collapse 16 Mattertags into one, depending on how far away you are from it. You can call this a level of detail. So this is an example of using spheres as level of detail for Mattertags, all doable dynamically using transient Mattertags. These don't have to be inserted in Workshop or anything like that.
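
Here is a hedged sketch of that level-of-detail idea: keep a cluster collapsed until the camera is near the sphere source, then expand it into transient Mattertags. It assumes the Mattertag namespace's add/remove calls for transient tags; the descriptor fields, the exact semantics of inRange, and all tag content below are assumptions or placeholders to verify against the SDK docs.

// Sketch of Mattertag level of detail: expand a cluster only when the user is close
// to the sphere source named 'tag-cluster' (see the earlier setup sketch).
declare const sdk: any;
declare const sensor: any;

const detailTagDescriptors = [
  { label: 'Outlet', anchorPosition: { x: -2, y: 1, z: 3 }, stemVector: { x: 0, y: 0.3, z: 0 } },
  { label: 'Thermostat', anchorPosition: { x: -2.5, y: 1.4, z: 3 }, stemVector: { x: 0, y: 0.3, z: 0 } },
]; // placeholder content

let detailTagIds: string[] = [];

sensor.readings.subscribe({
  async onUpdated(source: any, reading: any) {
    if (source.userData.name !== 'tag-cluster') return;

    if (reading.inRange && detailTagIds.length === 0) {
      // Close enough: expand the cluster into individual transient tags.
      detailTagIds = await sdk.Mattertag.add(detailTagDescriptors);
    } else if (!reading.inRange && detailTagIds.length > 0) {
      // Far away again: collapse back by removing the transient tags.
      await sdk.Mattertag.remove(detailTagIds);
      detailTagIds = [];
    }
  },
});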

Guillermo Bruce (11:25):
Lastly, sweep control. So this is a sort of interesting case where you may want to disable navigation to some sweeps, depending on whether you're in close proximity to another sweep. So this is a toy example of how you might do that, and an ideal case for using the cylinder shape as you approach this area. Note that the cylinder happens to be around this sweep, but there is nothing saying that it must be around this sweep; it's a little bit arbitrary at this point. So as I enter that cylinder, some sweeps show up. And that's it.
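
And a sketch of that sweep-control demo: gate a set of sweeps on whether the camera is inside the cylinder source. The setSweepsEnabled helper below is a hypothetical stand-in, not an SDK call; wire it to whatever sweep enable/disable functionality your SDK Bundle version exposes, and the sweep IDs are placeholders.

// Sketch: reveal a set of sweeps only while the camera is inside the 'sweep-zone' cylinder.
declare const sensor: any;

const gatedSweepIds = ['sweep-id-1', 'sweep-id-2']; // placeholder sweep IDs

async function setSweepsEnabled(sweepIds: string[], enabled: boolean): Promise<void> {
  // Hypothetical stand-in for the actual sweep toggle in your SDK version.
  console.log(enabled ? 'enabling' : 'disabling', sweepIds);
}

sensor.readings.subscribe({
  async onUpdated(source: any, reading: any) {
    if (source.userData.name !== 'sweep-zone') return;
    // Entering the cylinder reveals the gated sweeps; leaving hides them again.
    await setSweepsEnabled(gatedSweepIds, reading.inRange);
  },
});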

Raghu Munaswamy (12:04):
That's great. Thank you, Guillermo. And then, for the audience to relate to what these capabilities allow you to do: as just a simple scenario, let's say you have a space where you want some action to trigger automatically. For instance, in a real estate listing, as the visitor enters the living room, you want the television to start playing, because [inaudible 00:12:28] in that particular space. Now you could do that. You could set up a sensor, and as you detect visitors coming into that particular area, you would have the video start playing because they are in there, versus today, where you could have a video playing on an ongoing basis. This gives that personalization and engagement element to our spaces. That's one.

Raghu Munaswamy (12:50):
Second is, again, building on top of real estate, because that's easy for everybody to relate to. Let's say the same visitor enters the kitchen and looks around, but now he's looking at the refrigerator, and you have a very high-end refrigerator there. Now, just because his view is toward that direction, you can detect that and show the price tag, or have the Mattertag, as Guillermo was showing, pop up at that point to say, "This is X, Y, Z dollars." So that's the kind of engagement that's easy to unlock with SDK sensors. Hopefully that helps you relate to the core capabilities that we unlock with this feature.
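
A minimal sketch of the living-room scenario Raghu describes, assuming a box source placed over the living room (as in the earlier setup sketch) and an HTML video element on the hosting page; the element id and the userData name are placeholders of my own.

// Sketch: start an HTML video when the visitor enters the 'living-room' box source.
declare const sensor: any;

const tv = document.getElementById('living-room-tv') as HTMLVideoElement; // placeholder element

sensor.readings.subscribe({
  onUpdated(source: any, reading: any) {
    if (source.userData.name !== 'living-room') return;
    if (reading.inRange) {
      tv.play(); // visitor entered the living room
    } else {
      tv.pause(); // visitor left
    }
  },
});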

Raghu Munaswamy (13:30):
Also, again, if you could put the slide back on... There were a couple of pieces of information on there. Most of you are familiar with the DL, where you can reach out to us at developers@matterport.com. And you could reach out to us on the same DL, both for questions on this topic and for any topics that you want to see in this forum come back in the future; that would be the DL to reach. As Guillermo pulls it up, you'll be able to see in a moment that there's also a link to the technical documentation on how SDK sensors can be leveraged. Let's see.

Amir Frank (14:10):
We will have this recorded, and those links will definitely be available in the description of the recording on YouTube as well. So, let's get there.

Raghu Munaswamy (14:20):
Yeah, for sure. And yeah, let us know any questions based on what you've heard. And while you take the time to talk through it, maybe... Again, well, let's talk a little bit about the motivation for building this. I alluded to this at the beginning of the call. The ask for spatial awareness has been a consistent ask, and previously, loading up the API and SDK and being able to relate or tie back the sweeps to the location was a lot more complicated. With SDK sensors, how easy does it become, Guillermo? Anything that you want to share on how [inaudible 00:14:59] easy the development becomes with this capability?

Guillermo Bruce (15:02):
Yes. Whereas before we had to tie content directly to a sweep, so you'd have to know the IDs, and the IDs would have to match a location in a space, so you'd have to identify those, track those, and keep those in your own systems. With a source, a volume source, the volume source itself becomes the thing that you create that becomes the trigger for your behavior, whereas previously you might have been entering or leaving a sweep. So the volume makes it independent of the sweep. As long as your volume covers the entire sweep, then you don't care what the sweep ID is, and you don't have to track that anymore.

Raghu Munaswamy (15:43):
Yeah. Easy handling and simpler implementation and obviously easier maintenance as well. Sure. And we have a question that came in, that's from Steven, [inaudible 00:15:56]. Maybe, Guillermo, you could take this. "Does the SDK sensor work with VR mode?"

Guillermo Bruce (16:02):
Yes, it's a 3D construct, which is part of our general 3D framework. So they would work with it.

Raghu Munaswamy (16:12):
Folks, any other questions, keep them coming, and the [inaudible 00:16:15] panel is here to answer. And anything else in terms of... So this is a feature that was recently launched, and we've had some working demos internally. And if you have any questions on how to implement these, Brian, who's been our technical support, is your point of contact. Most of you have worked very closely with Brian. He could be our point person to take the questions and work toward the solutions as well.

Raghu Munaswamy (16:49):
So any other questions, team, let them come our way. The next question, again, "Could the sensors be used to try and [inaudible 00:16:59] to provide heat map reports?" Guillermo, is that something that's possible?

Guillermo Bruce (17:07):
I assume that means you want to know if people or users are looking at specific parts or areas in the space. If that is true, then you could mark up your space with a few sensors where [inaudible 00:17:24] points of interest. And when they are in view, you would get that data through the callbacks in the sensor system. The sensor system will tell you if something is in view or not. In addition, it will tell you if it's within range, also, if you're in proximity.
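
As a sketch of how that heat-map data could be collected, the snippet below accumulates how long each source stays in view and leaves it to you to ship the totals to your own analytics backend. The inView field follows the sensor docs linked earlier; everything else (names, the map structure, the reporting step) is my own scaffolding.

// Sketch: accumulate per-source dwell time (time in view) for heat-map reporting.
// Assumes sources created as in the earlier setup sketch, with a name in userData.
declare const sensor: any;

const dwellMs = new Map<string, number>();   // total time each source was in view
const enteredAt = new Map<string, number>(); // when the source last came into view

sensor.readings.subscribe({
  onUpdated(source: any, reading: any) {
    const name = source.userData.name;
    const now = Date.now();

    if (reading.inView && !enteredAt.has(name)) {
      enteredAt.set(name, now); // source just came into view
    } else if (!reading.inView && enteredAt.has(name)) {
      // Source left the view: add the elapsed time to its running total.
      const elapsed = now - (enteredAt.get(name) as number);
      dwellMs.set(name, (dwellMs.get(name) ?? 0) + elapsed);
      enteredAt.delete(name);
    }
  },
});

// Later, e.g. before the page unloads, send dwellMs to your own reporting endpoint.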

Raghu Munaswamy (17:40):
So using that event data, they can definitely build the heat maps. Great. Cool. The next one: "What other sensors are you thinking of implementing in the future?" Yes, Guillermo.

Guillermo Bruce (17:58):
I can go through a few. Currently, we're focused on camera-based sensors. So one concept that we're thinking about is cylinder-based: instead of a frustum, you would have a tube going along your camera direction, which allows you to pinpoint a small area in front of you. That would be one possible thing that we've considered. Other things would be detaching the sensor from the camera and having an arbitrary volume traveling through the space. That's where we're thinking about going.

Raghu Munaswamy (18:33):
Absolutely. And also, we are open to hearing from you as well, as our top users of the SDK and API capabilities. One of the goals for today's conversation was also to introduce you to this. And we want to hear from you what other use cases you think we can unlock on top of these spatial capabilities; keep them coming our way. Again, developers@matterport.com is how you can reach out to us. And let us know what else you want to see.

Raghu Munaswamy (19:00):
Any other questions, keep them coming. And I want to take a moment at this point to recap what has been spoken about so far: SDK sensors are a capability that has been asked of us for a long time, especially around being spatially aware and how we can better engage with customers. This is a first step in that direction, and you can keep engaging with us at developers@matterport.com, or [inaudible 00:19:30] share your feedback. And yeah, we're getting down to the final minutes. If you have any questions, please send them over.

Amir Frank (19:41):
A question from Alberto just came in: "Can the sensor functionality be used on the SDK embed or bundle version?"

Guillermo Bruce (19:50):
Both. It can be used with the embed version and with the bundled version.

Amir Frank (19:54):
Very good. I know we have a relatively small audience today. We kept it to really just the power players with SDK. But by all means, ask away. The questions panel is down at the bottom. You can just tap on that and ask anything you want at this point.

Raghu Munaswamy (20:20):
And this could be the time, since you have our time and attention: if there are other topics that you want to hear from us on, yes, obviously you can email us, but also put in the forum all the topics you want to see here for future sessions as well.

Amir Frank (20:37):
Yeah. What topics related to the API and SDK are most interesting to you would be great feedback as we develop the series. So getting that kind of information from you, and what interests you, would be very helpful.

Raghu Munaswamy (20:49):
We have another question that came in: "Is it possible to customize the message shown by the sensor area?" So I presume, I think, the question here is more about: can I have two different sensor areas and have a different message shown for each of those?

Guillermo Bruce (21:07):
Yes. So what you saw in the video was an example that we will be providing with the SDK examples we've built. That particular example simply displays the text in front, but it can actually do anything you want. So yes, it is possible to customize the message. If you were to take that example and take the part that you need for your application, you can do whatever you want once you have the callback for that.

Raghu Munaswamy (21:35):
I don't think we have any more questions coming in. Oh, there's one more coming.

Amir Frank (21:40):
There's another one from Stephen.

Raghu Munaswamy (21:40):
There's one more from Stephen. "How does the sensor SDK work with different [inaudible 00:21:49] plans, et cetera?"

Guillermo Bruce (21:52):
Yeah. It's independent of the modes. So that may be a source of additional consideration when building your application, as to whether you want behaviors that occur in one mode versus another, if you're using Showcase as the baseline, but it's independent. So if you were to have a box that shows a title and it works in pano mode or inside mode, it would also happen to work in dollhouse. So as you kind of move around, you'll see the same behavior. It's consistent in the [inaudible 00:22:27].

Amir Frank (22:27):
Does that mean that when you go to the dollhouse view, you're actually positioning yourself outside of the sensor?

Guillermo Bruce (22:35):
No, the sensor goes with you, your camera actually moves above the model.

Amir Frank (22:39):
I'm sorry. Okay. I was thinking of those spheres that you were showing, not the...

Guillermo Bruce (22:45):
Those are the sources.

Amir Frank (22:46):
The sources. Sources. So if you had a model, you mentioned a model, with a whole bunch... I mean, we've seen models just completely littered with Mattertags, and you can really hardly see and make out what's inside the dollhouse because there are so many Mattertags. When you go out to the dollhouse, can you have it so that you really don't see any of them, and then only when you go in, you see the ones that are relevant to you in your area?

Guillermo Bruce (23:10):
Yep. Yes. That would be possible with that.

Raghu Munaswamy (23:13):
And that takes care of, I think, all of the items [inaudible 00:23:16]. And this allows you to control what gets shown and in what context, and it's helping you manage the tags better as well. The next question, from Daniel, is: "Any limit on the number or size of boxes, 3D boxes?" I believe that's the...

Guillermo Bruce (23:41):
I would say that the box size cannot be zero, but it probably can be any number greater than zero. Any. It could be large, really large. You can encompass the whole model, if that's what you're referring to, Daniel.

Amir Frank (23:55):
And are you limited in the number of sources that you can throw in there?

Guillermo Bruce (24:00):
We have not put any limits on sources, so you can add as many as your machine's memory allows.

Amir Frank (24:10):
Cool.

Raghu Munaswamy (24:11):
I [inaudible 00:24:11] the same question, again: how crazy can one get with overlapping, or is that a concern? Is that something to keep in mind, I think?

Guillermo Bruce (24:26):
Actually, I would expect them to be overlapping. That's very common to have that, to use overlapping boxes of different sizes or spheres to change the behavior. I would expect that. The visuals may not look nice, but I think the visuals are merely a tool to just see what you're doing. At runtime you wouldn't actually turn those on.

Raghu Munaswamy (24:50):
Yeah, exactly. In other words, the colored boxes and cubes that we're seeing are for the development view. So when we really implement those as sensors within the SDK, the end user will not see them. Any other questions, folks? I don't have any at this point.

Amir Frank (25:06):
Can you maybe... I don't know, since you guys have been working on this, you're thinking a lot about the potential that's behind it. Can you give me like this, I don't know, really crazy visionary use case for these tools? What have you thought of in your head like, how this can be used potentially to do some really cool stuff with Matterport?

Guillermo Bruce (25:34):
Well, I think as the models get complex, I think one of the biggest things, and this may not seem visually big, is that there'll be additional content constantly being added to models, content that comes from our partners and other applications, that isn't in the initial model, right? And as those get more complex, you need to be able to manage that by unloading and loading things that are relevant to the user's current experience. So I think one of the big things is being able to manage in a spatial way what you see, and unload what you don't see, such that you're not overloading your system performance.

Raghu Munaswamy (26:15):
Yeah.

Amir Frank (26:17):
Good. That makes sense.

Guillermo Bruce (26:17):
Is that visionary? I don't know. But it's a very important aspect of scaling a Matterport model.

Raghu Munaswamy (26:27):
Yeah. So, just to build on what Guillermo mentioned, the sky's the limit. If you look at this, what we're unlocking here is a way to engage with the customers. We know what their perspective is, and where they are. And it may not be visionary, but in my mind, what strikes me is: if you can build a VR model and have a... how do we put it... where I walk into a dark room and things pop out at me as my perspective changes. Now there's lots of possibility with this, as we're unlocking some basic capabilities here. Easier to implement as well, if you think about it.

Amir Frank (27:12):
And these are not yet sensitive to the cursor, just your view and your position within the model.

Raghu Munaswamy (27:20):
Right.

Amir Frank (27:20):
Is that something that you're planning on, creating sensors for a cursor?

Guillermo Bruce (27:24):
We can do that. We already have something called a pointer intersection that serves that purpose, but we likely will want to transfer that to the sensors and sources system. But that's an existing system. Yes. I think there's one more question.

Amir Frank (27:41):
Yes.

Raghu Munaswamy (27:43):
Yes. So, "What's the [inaudible 00:27:44] for having a source per entity was very old, lazy loading purposes?" From Steven.

Guillermo Bruce (27:51):
There's very little overhead for these. They're really lightweight objects. I don't think there's anything to call out as far as resources. So Dustin also works on the SDK with me. He built the system. So maybe this is something he could shed some light on.

Dustin Cook (28:13):
Yeah, setup and teardown should be fairly minimal. That shouldn't really play a part in any kind of resource constraints. There is the potential for additional ray casts against the model to detect occlusion, so maybe that's something you might have to watch out for if you're adding hundreds of these. But otherwise, the simple volume tests and the ray casts are really probably the biggest things. Even then, the cost of the overlap volume collisions is pretty negligible, I believe.

Guillermo Bruce (28:57):
Thank you.

Raghu Munaswamy (28:58):
Yeah. We are at the halfway mark, Amir. I think if there are no other questions, we could leave the forum and the email as the formal channels for the partners to reach out to us. Again-

Amir Frank (29:11):
Yeah, absolutely. [crosstalk 00:29:12] Yeah. So as I mentioned, sorry Raghu, this is recorded, obviously, and we'll have all this information online. We'll share the link in a follow-up email after the webinar, so you can watch this, share it with your friends, whatever you want, along with the links to contact us at, what is it, developers@matterport.com?

Raghu Munaswamy (29:40):
That's correct.

Amir Frank (29:42):
Okay, perfect. Yeah. With any questions. So that's open to anything that is not specific to sensors, but anything SDK and API-related.

Raghu Munaswamy (29:54):
Yeah. And all new topics as well, Amir. Any folks who want to hear any specific spotlight topics from [crosstalk 00:30:01] can point them right to us, and we will be happy to facilitate a conversation just the way we did it today. Very focused topics or even broader topics, [inaudible 00:30:12] for grabs.

Amir Frank (30:14):
Yeah. Yeah, absolutely. All right. So with that, if there are no other questions about sensors, I hope you found the information useful and enjoyed it, and I'm looking forward to seeing some models and you guys using this stuff.

Raghu Munaswamy (30:34):
Yeah. Absolutely.

Amir Frank (30:35):
All right. I'll give you back 25 minutes. All right. Thanks, everybody.

Raghu Munaswamy (30:44):
Thank you.

Amir Frank (30:45):
Take care. Have a good rest of the day. Bye-bye.
Post 2
DanSmigrod | WGAN Forum Founder & WGAN-TV Podcast Host | Atlanta, Georgia
Video: What is Matterport API? | Video courtesy of Momentum 360 YouTube Channel | 5 May 2021
Post 3
Metroplex360 | Frisco, Texas
Clickbait 😃. Sorry, but there's really no information in this video at all :-) The big takeaway is that there is an API. What would have been good is if it had explained that there is not only an SDK, but also a more advanced SDK called the SDK Bundle, and a REST API to the Matterport cloud system.
Post 4
Wingman | WGAN Fan Club Member | Queensland, Australia
Agree with Metroplex.
API actually stands for Application Programming Interface, and an SDK is not what he said it is. An SDK is a set of tools (libraries) that you import into your code to get access to an API.
Post 5
Wingman | WGAN Fan Club Member | Queensland, Australia
Remember that in the Google Street View app you can connect any external camera and use it for capturing. It would be cool if Matterport could provide a way to integrate just about any 360 camera into the capture process. That would open the platform to Pilot cameras and to the top range of Insta360 cameras (Pro, Pro2, Titan). Currently the highest-resolution camera that works with Matterport is the 23MP Z1.

I believe with resolutions starting from 32MP and going up to almost 60MP for the Titan, we should get a much better splash. I guess all 3D conversion happens at the pixel level, so the more pixels packed into a 360, the more data you get... probably even covering a much greater distance.

If they have it already, how does it work at the capture level? You cannot just write a whole app similar to the Capture app.
Post 6
This topic is archived.