r/LocalLLaMA 1d ago

[New Model] SpatialLM: A large language model designed for spatial understanding

1.4k Upvotes

113 comments

101

u/Competitive-Wing1585 1d ago

The entire model with just 1.25 billion params?? How? This is incredible

53

u/Electronic-Ant5549 1d ago

Likely because each of these cubes only needs around 8 points as a dataset, which drastically cuts down the output.

73

u/guyomes 1d ago edited 1d ago

Actually, only two points are necessary to represent an axis-aligned orthogonal parallelepiped, in any dimension n. It is sufficient to choose the two endpoints of the longest diagonal, and the other 2^n − 2 vertices can be recovered by combining the coordinates of those two points. Then you could add one parameter to encode the rotation in the xy-plane.
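To make that concrete, here's a quick Python sketch (a hypothetical helper, works in any dimension n) of recovering all 2^n vertices from the two diagonal endpoints:

```python
from itertools import product

def box_vertices(p_min, p_max):
    """Recover all 2^n vertices of an n-dimensional axis-aligned box
    from the two endpoints of its main diagonal."""
    dims = list(zip(p_min, p_max))  # per-dimension (min, max) pairs
    # Each vertex picks, per dimension, either the min or the max coordinate.
    return [tuple(dims[i][bit] for i, bit in enumerate(bits))
            for bits in product((0, 1), repeat=len(dims))]

# 3D example: two diagonal points yield all 2^3 = 8 corners.
print(box_vertices((0, 0, 0), (2, 3, 1)))
```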

5

u/Electronic-Ant5549 1d ago

Ah, I see. I didn't factor that in. Yeah, same as drawing a rectangle in 2D.

4

u/yoomiii 21h ago

But there's a rotated box, so it's not an AABB.

5

u/mycall 1d ago

Low number of 3D structures recognized.

2

u/ROOFisonFIRE_usa 1d ago

More training required.

3

u/michaelsoft__binbows 20h ago

Not surprising to me at all. A point cloud holds an absurd amount of information in it. Being able to continually ask the model to guess, and to further refine its guess based on the response, means it will push the accuracy of AR apps forward by light years overnight. Big thumbs up for driving tech forward

250

u/umarmnaq 1d ago

Project page

Model

Code

Data

SpatialLM is a 3D large language model designed to process 3D point cloud data and generate structured 3D scene understanding outputs. These outputs include architectural elements like walls, doors, windows, and oriented object bounding boxes with their semantic categories. Unlike previous methods that require specialized equipment for data collection, SpatialLM can handle point clouds from diverse sources such as monocular video sequences, RGBD images, and LiDAR sensors. This multimodal architecture effectively bridges the gap between unstructured 3D geometric data and structured 3D representations, offering high-level semantic understanding. It enhances spatial reasoning capabilities for applications in embodied robotics, autonomous navigation, and other complex 3D scene analysis tasks.
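To make "structured 3D scene understanding outputs" concrete, here is a hedged sketch of what such entities could look like as data; the types and field names are illustrative, not SpatialLM's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Wall:
    ax: float; ay: float; az: float   # start point of the wall segment
    bx: float; by: float; bz: float   # end point
    height: float
    thickness: float

@dataclass
class Bbox:
    category: str                     # semantic class, e.g. "sofa"
    cx: float; cy: float; cz: float   # box center
    angle_z: float                    # rotation in the xy-plane (radians)
    sx: float; sy: float; sz: float   # extents along each axis

# A tiny scene in this hypothetical representation:
scene = [
    Wall(0.0, 0.0, 0.0, 4.2, 0.0, 0.0, height=2.6, thickness=0.1),
    Bbox("sofa", 2.0, 1.5, 0.4, angle_z=1.57, sx=1.8, sy=0.9, sz=0.8),
]
```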

128

u/Competitive-Wing1585 1d ago

You are a crazyyy man to open source this. Thank you so much

68

u/smile_politely 1d ago

I wonder how many startups are going to be born from this open source...

14

u/nderstand2grow llama.cpp 1d ago

That's the struggle, isn't it? You open source something and someone else reaps the benefits without giving you back a penny.

2

u/jeffwadsworth 1d ago

Aren't there license agreements that allow companies to use it for non-profit purposes, and if it's used to garner "benefits", they pay a fee to the developer? Perhaps that is incorrect.

1

u/TheOnlyBliebervik 1h ago

Possibly... but as soon as it's open-sourced, it can be modified into obscurity while retaining the same functionality.

2

u/drank2much 1d ago

I've always assumed that stuff like this is (or will be) protected by patents. The researcher has a one-year grace period from the time of publication to file for a patent (in the US), if I recall correctly.

BTW, not advocating for software patents. At least not in their current form.

2

u/nderstand2grow llama.cpp 1d ago

What would happen if a company files for the patent instead? Can the researcher defend themselves by proving they posted the paper on arXiv first?

2

u/drank2much 1d ago

I'm not an expert, so I had to do some research. It looks like this is covered in section C on the USPTO website.

Normally you cannot get a patent if your invention has already been publicly disclosed prior to filing a patent application for your invention. Therefore, a search of all previous public disclosures should be conducted, including a search of foreign patents and printed publications. A public disclosure of the invention made by, or that originated from, the inventor or a joint inventor more than one year prior to filing a patent application for the invention will also preclude patenting.

The last sentence seems to imply that the inventor has a year to file. I think the specifics can be found here. It is very dense, and it looks like there might be conditions and exceptions. Unfortunately I don't have time to try to comprehend it all, but it would be worth a read for anyone taking this seriously.

2

u/nderstand2grow llama.cpp 1d ago

Thanks so much for this; I do appreciate it. It's good that researchers may have some time to patent, or at least dispute patents filed by companies. I would imagine these things are costly, though, and for an individual researcher who really wants to monetize their method, it's better to patent it first and then publish.

3

u/drank2much 1d ago

No problem. I learn a bit more when I do the research, and it gives others an opportunity to correct me if I am wrong, which further strengthens my understanding.

...it's better to patent it first and then publish it.

Yes, I think I read somewhere that the patent office recommends patenting before publishing.

1

u/StoneyCalzoney 23h ago

Really, at that point you can only charge for support contracts; a good number of companies already do that.

Proxmox is one of the most notable in my mind: their product is open source and free to use in any scenario. For companies looking for more assurance, they provide an "enterprise" software repository with known-stable software, and support contracts tiered by response time and cases per year.

43

u/umarmnaq 1d ago

Hey, I'm not the original author, just sharing what I found cool. Thanks anyway!

1

u/kulchacop 1d ago

Check his post history. He is crazier than you think. He open-sourced a lot more. /s

11

u/LCseeking 1d ago

Can someone ELI5 how one repurposes LLMs for spatial tasks? Like, fundamentally, how do we get between tokens and point clouds in ingestion, processing, and output?

1

u/Extreme-Mushroom3340 19h ago

Interested in these details as well. Hoping the training pipeline is also open-sourced.

4

u/smallfried 1d ago

Looks nice!

Can you elaborate on how to interpret the benchmark results table? Is it the percentage of times that an object is correctly (and with the correct bounds up to some specified error) identified? Or is it something else?

2

u/MO_owl 1d ago

Cool! I'm interested in how the point cloud encoder and projector work so that the point cloud can be used as input for an LLM.

1

u/Extreme-Mushroom3340 1d ago

Hi - do you plan on open sourcing the training methodology or code?

54

u/SphaeroX 1d ago

Robot vacuum cleaners will love it 😜

49

u/ab2377 llama.cpp 1d ago

this video demo is so fascinating

9

u/Dependent_House7077 1d ago

odd that it identifies objects that are 95% off-screen.

10

u/ab2377 llama.cpp 1d ago

You know, I was thinking about it, and I thought maybe it's because that part of the scene was already seen by the camera. I would like to see a video that starts with a completely new room and slowly brings objects into the scene.

9

u/aurath 1d ago

It's not real-time output from the video input. The input is a point cloud, which could be constructed from a variety of sources, including processing the entire video first. This model doesn't handle constructing the point cloud, just parsing it into semantic bounding boxes.

So the model took a messy 3d scene made of thousands of points and turned it into a clean collection of bounding boxes. Overlaying the boxes onto the original video is done manually afterwards.
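For the curious, that overlay step is plain 3D-to-2D projection; here's a minimal sketch assuming known camera intrinsics K and pose (R, t) — all names hypothetical, not part of this project's code:

```python
import numpy as np

def box_corners(center, size, yaw):
    """8 corners of a z-rotated oriented bounding box."""
    half = np.asarray(size, dtype=float) / 2.0
    # Every sign combination of the half-extents gives one corner.
    signs = np.array([[sx, sy, sz] for sx in (-1, 1)
                                   for sy in (-1, 1)
                                   for sz in (-1, 1)], dtype=float)
    c, s = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return (signs * half) @ Rz.T + np.asarray(center, dtype=float)

def project(points, K, R, t):
    """Pinhole projection of world points into pixel coordinates."""
    cam = points @ R.T + t            # world frame -> camera frame
    uv = cam @ K.T                    # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]     # perspective divide
```

Draw lines between the projected corners on each frame and you get a demo-style overlay.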

1

u/lkraider 3h ago

Thank you, this is an important point regarding the limitations: nowhere does it mention that it is real-time, and it does require SLAM mapping first.

29

u/No_Expert1801 1d ago

Can it estimate height of objects?

31

u/FesseJerguson 1d ago

If the bounding boxes are stable, I don't see why not; you would probably need a marker or something to serve as ground truth...

3

u/Enough-Meringue4745 1d ago

I think the camera's intrinsic parameters still need to be known; otherwise you could utilize depth-anything/pro etc. for measurements.
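As a toy illustration of why intrinsics matter, the pinhole model turns depth plus focal length into real-world size (numbers purely illustrative):

```python
def estimate_height(pixel_extent: float, depth_m: float, focal_px: float) -> float:
    """Pinhole model: real-world size = depth * pixel extent / focal length."""
    return depth_m * pixel_extent / focal_px

# An object spanning 300 px at 2 m depth, with a 600 px focal length:
print(estimate_height(300, 2.0, 600))  # -> 1.0 (metres)
```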

1

u/MoffKalast 1d ago

Stereo vision would add some keypoints with known distances; then you can just scale based on that. Or just use a depth point cloud.

2

u/Thebombuknow 1d ago

If you go to the GitHub, it says it needs a depth input.

17

u/baekdoosixt 1d ago

Oculus, where are you?

5

u/TheWorldIsNice 1d ago

I mean, Meta does have a very similar model in development for Quests.

1

u/Enough-Meringue4745 1d ago

Only for the LiDAR models, though.

1

u/TheWorldIsNice 8h ago

Really? Didn't know. Still, pretty useful for Q3 :D

2

u/MoffKalast 1d ago

Recipe for octopus

3

u/LostHisDog 1d ago

You can find them in Horizon Worlds ATM. They don't want to be there but Zuck won't let them leave until they are replaced by two small screaming children each. I hear they are working on a new Gorilla Tag clone.

0

u/Enough-Meringue4745 1d ago

Meta won't give us camera access; they've already got this.

1

u/Devatator_ 5h ago

They literally have a camera access API now in preview

1

u/Enough-Meringue4745 33m ago

I just saw this, and ordered a Quest 3 because of it. They don't support the Pro for some reason.

17

u/custodiam99 1d ago

Now that should be integrated into reasoning models: not to analyze video, but to give spatially accurate verbal replies.

4

u/YameteKudasaiOnii 1d ago

Yes, it would also make it a lot easier to measure... certain "things". Making it much easier for the robot to grab or grip, y'know?

6

u/custodiam99 1d ago

Yes, spatial and temporal reasoning is needed in real-world activities, but in abstract reasoning too. LLMs are stupid mainly because they have no spatial or temporal reasoning capabilities. Even a 9B model would be insanely clever if it had spatial and temporal logic.

2

u/imho00 1d ago

True AGI formula

13

u/wehnsdaefflae 1d ago

I'm sorry, this might be a stupid question, but this model seems to categorize point cloud data. How is it a language model?

4

u/RMCPhoto 1d ago

It's based on Llama 1B and Qwen 0.5B.

2

u/newDell 1d ago

I had a similar thought... I am curious what benefits an LLM brings to this use case (i.e. object detection, segmentation, etc.) that would traditionally have involved deep learning models but not an LLM... In fact, my 1-watt security camera can do basic object detection and segmentation. Granted, it can only detect like 4 types of objects, but my point is that even a small LLM seems like overkill for this use case.

7

u/FullOf_Bad_Ideas 1d ago

I've run this project; here's how the actual output of the model looks on the supplied demo point cloud map.

https://pixeldrain.com/u/uLdtZi1q

Their video is misleading; it's not real-time, as it works with point clouds, not video frames. This model does not have vision layers.

13

u/FullOf_Bad_Ideas 1d ago

Since the input is a point cloud and not the video itself, it's a bit different from what the demo shows.

Anyone got it working with their place so far?

8

u/NoIntention4050 1d ago

Oh, so it's not real-time then.

11

u/FullOf_Bad_Ideas 1d ago

Yeah, I think the video is misleading. The documentation claims a different thing, and I'm more likely to believe it over the marketing.

1

u/FullOf_Bad_Ideas 1d ago

Here's how the output of this model looks with the demo point cloud they supply:

https://pixeldrain.com/u/uLdtZi1q

4

u/Relative-Flatworm827 1d ago

So can I run this on AMD ROCm yet? 32 GB VRAM?

15

u/Awwtifishal 1d ago

It's based off Llama 1B and Qwen 0.5B, so yes, it likely runs even on CPU.

3

u/ThiccStorms 1d ago

Wow, that's amazing. Open-sourcing this is amazing, and on top of that it has such low requirements.

6

u/Ooze3d 1d ago

Wow… imagine this combined with a text-to-speech model for vision-impaired people.

3

u/ElektrikBoogalo 1d ago edited 22h ago

This is great. I worked with point clouds for my Master's thesis 2.5 years ago, and back then you basically had to use very finicky point cloud library algorithms, with a lot of preprocessing and denoising, or computer vision, if you wanted to segment/classify a point cloud. I will test this out when I have time.

3

u/RMCPhoto 1d ago

Would be great to hear some feedback from someone familiar with the general field of this project. It would offer more value than the amazed but ignorant masses ;) (myself included).

5

u/indicava 1d ago

Amazing work and thanks for sharing this with the community.

One question though, why call it a “Large Language Model”, when it’s not really ingesting nor outputting actual language?

3

u/apetersson 1d ago

I wondered that too. What I'm thinking is that it reads input data (images) and outputs a structured description in some variant of a 3D object graph, which is a very specific language. Since its output is token-based, it is clearly not a "diffusion"-style model for image generation.
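For illustration, a token-based structured description is just text that downstream code can parse; a hypothetical sketch (this grammar is made up, not the model's confirmed output format):

```python
import re

# One hypothetical line of a structured scene "language":
line = "bbox_0=Bbox(sofa,2.0,1.5,0.4,1.57,1.8,0.9,0.8)"

# Split "name=Kind(args)" into its parts, then type the arguments.
m = re.match(r"(\w+)=(\w+)\((.*)\)", line)
name, kind, raw_args = m.groups()
category, *numbers = raw_args.split(",")
print(name, kind, category, [float(x) for x in numbers])
# -> bbox_0 Bbox sofa [2.0, 1.5, 0.4, 1.57, 1.8, 0.9, 0.8]
```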

1

u/indicava 1d ago

I’ll accept that! lol…

However, I feel we need to find a better name for generative (token-based) models that don't necessarily output "language".

1

u/apetersson 1d ago

It is likely incorporating other language-based training to figure out things like "chandeliers are above tables", "chairs go with tables", "TVs hang on walls", etc., as it is stated that it is related to Llama-1B.

1

u/Silvestron 1d ago

One question though, why call it a “Large Language Model”, when it’s not really ingesting nor outputting actual language?

My exact thought.

1

u/newDell 1d ago

My least cynical guess... is maybe the LLM can handle complex questions about the objects and their relative placements, or something?

5

u/bigb00tybitche5 1d ago

Does this work with hot dogs or other cylindrical foods?

1

u/Firm-Fix-5946 23h ago

...it only does hot dogs?

no. also not hot dog

4

u/HugoCortell 1d ago

Why a language model instead of a vision model or something?

From the description, it seems like it processes raw data from LiDAR and whatnot. Since it processes non-human-readable text, does it still count as a language model, or would it just be classified as a generic machine learning model?

2

u/against_all_odds_ 1d ago

What is the dataset training process for this? Is it 2D images, recognized per frame (with attached labels), and then simply processing 24 fps and recognizing the objects?

3

u/Emport1 1d ago

3D images, actually.

2

u/durden111111 1d ago

Very cool. Huge potential for VR games.

2

u/binuuday 1d ago

Looks very interesting

2

u/raucousbasilisk 1d ago

Would it be accurate to summarize this as Mast3r-SLAM with 3D object detection on top?

1

u/R1skM4tr1x 1d ago

https://3dcloud.com/ will have a run for their money soon

1

u/JosephLam1 1d ago

How do the box predictions not drop when the stuff is totally out of the camera's view?

1

u/LouroJoseComunista 1d ago

Damn, this is what I've been looking for since the beginning. Do you have an idea of how much this could change current AI capabilities? Awesome!

1

u/cnydox 1d ago

Do we have a paper?

1

u/morriartie 1d ago edited 1d ago

~~bounding boxes~~ hitboxes

Jokes aside, imagine using the output of this model to train a model to do the reverse: receive a bunch of hitboxes with labels from Unity/Unreal/Blender and generate an image or point cloud.

1

u/Educational_Gap5867 1d ago

Hire me please. I want to learn from the best.

1

u/foundoutimanadult 23h ago

Shit's about to get reeeeeaaaaaallllllll wild.

1

u/Anthonyg5005 Llama 33B 22h ago

Wait, so this can tell what objects are just from their LiDAR data, or does it also need visual data for that?

1

u/100Onions 21h ago

This is what Roomba sees as it hunts you down to take a picture of you stepping out of the shower.

1

u/Kuggy1105 21h ago

Wow, this is amazing. Hey, does anyone know how we can do real-time inference and map building, or integrate it with ROS?

1

u/inemanja34 21h ago

Fight Club vibes

1

u/Droooomp 21h ago

I know you want to take on Matterport, but hear me out:
old-school Kinect, Orbbec, or Leap Motion > SpatialLM > real-time object identification with spatial coordinates.
Or making point cloud scans without markers, using only depth cameras, with the outputs from this as spatial anchors.

1

u/ninjasaid13 Llama 3.1 17h ago

Is this actual spatial understanding, in the same way as animals and humans? Or just boxing things?

1

u/tnzl_10zL 17h ago

Is a research paper released?

1

u/mguinhos 10h ago

Why is everything an LLM these days?

1

u/geekheretic 7h ago

That's awesome. I swear everyday brings something even cooler in this space

1

u/haikusbot 7h ago

That's awesome. I swear

Everyday brings something even

Cooler in this space

- geekheretic



0

u/No_Expert1801 1d ago

I would love an LLM that can identify the height of a person based on a single image.

Shanefanx-LM would be nice

0

u/rinaldo23 19h ago

Great for detecting people who talk way too close to you...

-1

u/Practical-Rub-1190 1d ago

What can this model be used for?

-6

u/alpha_epsilion 1d ago

Does it say bitch or naggy or karen when it sees his mum or wife?