How could the update thread tell a node in the render thread to reposition itself?

You absolutely do not want to be passing pointers around.
Threaded Game Engine
- xavier
- OGRE Retired Moderator
- Posts: 9481
- Joined: Fri Feb 18, 2005 2:03 am
- Location: Dublin, CA, US
- x 22
Entity ID or other handle that both can use. In many cases, you *can* just use the pointer value (as assigned by the render thread) as the object's opaque ID to the rest of the application, but you do not want to be using that value as a pointer in any other part of the application; just use it as a handle. In terms of Ogre, you can't anyway -- you can't access Ogre objects (other than resource manager stuff) from other threads without an almost guaranteed crash.
- Zeal
- Ogre Magi
- Posts: 1260
- Joined: Mon Aug 07, 2006 6:16 am
- Location: Colorado Springs, CO USA
xavier wrote: you can't access Ogre objects (other than resource manager stuff) from other threads without an almost guaranteed crash.
Well, I don't mean you would USE the pointer from the update thread. I just mean pass it to/from the render thread; then the render thread (and only the render thread) actually uses it to set a node position, etc.
That's OK, right?
- xavier
- OGRE Retired Moderator
- Posts: 9481
- Joined: Fri Feb 18, 2005 2:03 am
- Location: Dublin, CA, US
- x 22
Sure, that's what I meant by using it as a handle (which makes it easier than maintaining a lookup table of ID --> pointer mappings). The only problem is ensuring it's a valid pointer; you couldn't just use the value passed without some way of checking it first. Nothing is stopping any of your threads from tossing garbage on the queue (other than you of course).
- Zeal
- Ogre Magi
- Posts: 1260
- Joined: Mon Aug 07, 2006 6:16 am
- Location: Colorado Springs, CO USA
Well, what are we waiting for? Let's get this party started!
I propose the following simple app to help multithreading noobs (like me) get our feet wet. Let's create something that does just what we've been talking about: two threads, one to update a node's position (perhaps along a simple sine wave), and another to render said object. The two threads should run in parallel and only communicate via a 'message queue' (that hopefully won't need to be locked).
I would LIKE to use only Boost if possible (I've heard nothing but good things about it, and besides, it doesn't seem like we would need anything fancy). Also, we should try to create a design that doesn't specifically rely on multiple cores (since most of us only have one anyway).
Even if it runs slower than a single thread, I think this would be an important first step for all of us.
*I will start by digging up as much info on boost::thread as I can
- xavier
- OGRE Retired Moderator
- Posts: 9481
- Joined: Fri Feb 18, 2005 2:03 am
- Location: Dublin, CA, US
- x 22
- Game_Ender
- Ogre Magi
- Posts: 1269
- Joined: Wed May 25, 2005 2:31 am
- Location: Rockville, MD, USA
Well, you could implement a lock-free queue based on a compare-and-swap (CAS) instruction. Here is the API for Orocos (a real-time robotics toolkit) AtomicQueue.
Does anyone know how the CAS instruction works on multicore CPUs?
Boost.Threads also gives you a cross platform C++ threading interface. Along with all the other stuff you get with Boost.
- xavier
- OGRE Retired Moderator
- Posts: 9481
- Joined: Fri Feb 18, 2005 2:03 am
- Location: Dublin, CA, US
- x 22
- CaseyB
- OGRE Contributor
- Posts: 1335
- Joined: Sun Nov 20, 2005 2:42 pm
- Location: Columbus, Ohio
- x 3
- Contact:
Game_Ender wrote: Boost.Threads also gives you a cross platform C++ threading interface. Along with all the other stuff you get with Boost.
xavier wrote: Right, but it's awfully heavy to carry around, just to abstract CreateThread and pthread_create.
I agree if you are talking about the whole Boost library, but not if you use just the thread bit.
- Zeal
- Ogre Magi
- Posts: 1260
- Joined: Mon Aug 07, 2006 6:16 am
- Location: Colorado Springs, CO USA
I'm working on something right now...
Boost is nice for boost::mutex too (although I'm going to try not to use it for this simple demo).
*I found a little example that demonstrates 'kinda' what we're trying to do (it creates two threads that read/write to a central buffer/queue). Take a look...
At a glance, it LOOKS like it depends on a mutex, but on closer inspection I think that's just to lock the std::cout. Take that out, and everything looks pretty straightforward. Except for that boost::condition stuff... what the hell is that about?
Code:
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>
#include <iostream>

const int BUF_SIZE = 10;
const int ITERS = 100;

boost::mutex io_mutex;  // serializes access to std::cout only

class buffer
{
public:
    typedef boost::mutex::scoped_lock scoped_lock;

    buffer() : p(0), c(0), full(0) {}

    void put(int m)
    {
        scoped_lock lock(mutex);
        if (full == BUF_SIZE)
        {
            {
                boost::mutex::scoped_lock lock(io_mutex);
                std::cout << "Buffer is full. Waiting..." << std::endl;
            }
            while (full == BUF_SIZE)
                cond.wait(lock);  // releases the mutex while blocked
        }
        buf[p] = m;
        p = (p + 1) % BUF_SIZE;
        ++full;
        cond.notify_one();
    }

    int get()
    {
        scoped_lock lk(mutex);
        if (full == 0)
        {
            {
                boost::mutex::scoped_lock lock(io_mutex);
                std::cout << "Buffer is empty. Waiting..." << std::endl;
            }
            while (full == 0)
                cond.wait(lk);
        }
        int i = buf[c];
        c = (c + 1) % BUF_SIZE;
        --full;
        cond.notify_one();
        return i;
    }

private:
    boost::mutex mutex;       // guards the buffer itself
    boost::condition cond;
    unsigned int p, c, full;  // put index, get index, element count
    int buf[BUF_SIZE];
};

buffer buf;

void writer()
{
    for (int n = 0; n < ITERS; ++n)
    {
        {
            boost::mutex::scoped_lock lock(io_mutex);
            std::cout << "sending: " << n << std::endl;
        }
        buf.put(n);
    }
}

void reader()
{
    for (int x = 0; x < ITERS; ++x)
    {
        int n = buf.get();
        {
            boost::mutex::scoped_lock lock(io_mutex);
            std::cout << "received: " << n << std::endl;
        }
    }
}

int main(int argc, char* argv[])
{
    boost::thread thrd1(&reader);
    boost::thread thrd2(&writer);
    thrd1.join();
    thrd2.join();
    return 0;
}
Will have to look it up...
*Akk just found this..
A condition variable is always used in conjunction with a mutex and the shared resource(s).
Which seems to imply that in order to implement such a 'get/put' object you NEED to use a mutex. That kinda blows our whole lockless queue theory to hell... there must be another way?
Last edited by Zeal on Fri Dec 01, 2006 6:04 am, edited 1 time in total.
- CaseyB
- OGRE Contributor
- Posts: 1335
- Joined: Sun Nov 20, 2005 2:42 pm
- Location: Columbus, Ohio
- x 3
- Contact:
Zeal wrote: At a glance, it LOOKS like it depends on a mutex, but on closer inspection I think that's just to lock the std::cout.
Except that he has typedef'd boost::mutex::scoped_lock to scoped_lock, and those are scattered around the program appropriately. But mutex is part of the thread stuff, so it's still no big deal.
- Zeal
- Ogre Magi
- Posts: 1260
- Joined: Mon Aug 07, 2006 6:16 am
- Location: Colorado Springs, CO USA
CaseyB wrote: But mutex is part of the thread stuff so it's still no big deal.
A mutex is a way of locking the resource (the buffer in this case) so other threads have to wait if the resource is being used by another thread. This will be a NIGHTMARE for us, since both the update and render threads will be reading/writing this buffer/queue so frequently...
Right?
- xavier
- OGRE Retired Moderator
- Posts: 9481
- Joined: Fri Feb 18, 2005 2:03 am
- Location: Dublin, CA, US
- x 22
Yeah, just create the "render thread", the "logic thread" and the "control thread" (which actually creates and maintains the queues). Create the control thread first, and pass a pointer to it as part of the startup param data to the other two so they can put messages in / get messages from the queue. You might be able to get away with using std::queue for this, or you might have to write a simple queue class to handle it.
- Frenetic
- Bugbear
- Posts: 806
- Joined: Fri Feb 03, 2006 7:08 am
It is my impression that lockless queue idioms are fairly easy to come by.
The lockless queue consists of an array of object pointers, a head pointer and a tail pointer. The key to the lockless queue is that the writing thread is only allowed to modify the head pointer and the reading thread is only allowed to modify the tail pointer.
Fun thing to Google for.
I need to brush up on my concurrent programming skills, so I am watching this thread.
- CaseyB
- OGRE Contributor
- Posts: 1335
- Joined: Sun Nov 20, 2005 2:42 pm
- Location: Columbus, Ohio
- x 3
- Contact:
Zeal wrote: But you're saying if I used CreateThread() instead, I could run the same code, with no worry of mutex/locking issues?
xavier wrote: Yeah
NO! If you are touching the same resources from different threads, you need to lock them! Period! If you are reading, then you can have multiple threads reading from there at the same time; but if you are writing, then you need to allow only one thread to write to it, and you cannot read while writing or you may get garbage!
- CaseyB
- OGRE Contributor
- Posts: 1335
- Joined: Sun Nov 20, 2005 2:42 pm
- Location: Columbus, Ohio
- x 3
- Contact:
Frenetic wrote: The lockless queue consists of an array of object pointers, a head pointer and a tail pointer. The key to the lockless queue is that the writing thread is only allowed to modify the head pointer and the reading thread is only allowed to modify the tail pointer.
What happens if you've got a situation where you're just starting out, or if you have a really fast reader, and your head and tail pointers point to the same thing?
- Praetor
- OGRE Retired Team Member
- Posts: 3335
- Joined: Tue Jun 21, 2005 8:26 pm
- Location: Rochester, New York, US
- x 3
- Contact:
- Frenetic
- Bugbear
- Posts: 806
- Joined: Fri Feb 03, 2006 7:08 am
Here's something!
http://ldk.sourceforge.net/classLDK_1_1 ... Queue.html
Under a BSD-style license, so if it's useful the source can be freely plundered.
One thing I recall reading is that most "lock-free" idioms assume Intel architecture, where reading a resource without locking it is safe; does anyone know if it's true that simply reading something in a multithreaded environment using some non-Intel CPUs will cause Very Bad Things to happen?
- CaseyB
- OGRE Contributor
- Posts: 1335
- Joined: Sun Nov 20, 2005 2:42 pm
- Location: Columbus, Ohio
- x 3
- Contact:
- xavier
- OGRE Retired Moderator
- Posts: 9481
- Joined: Fri Feb 18, 2005 2:03 am
- Location: Dublin, CA, US
- x 22
CaseyB wrote: NO! If you are touching the same resources from different threads you need to lock them! Period!
That's not true. If your queue updates its element count *after* updating pointers, it doesn't matter whether you are reading from or writing to it -- if the reader sees a zero element count, it doesn't try to pop any element from the front. And it does not matter on the writing side whether the element count is zero or greater -- the pointers will be valid before the element count is updated, because the only pointer changed on the write side is the tail. In this case, the element count *is* your gatekeeper, without involving heavy OS support in the way of mutexes or other synchronization objects.
This works because of the timestepped nature of computer processing -- data advances on clock cycles, unlike real life (ignoring quantum arguments), where there can be uncertainty at a given instant about the state of the element count.
- CaseyB
- OGRE Contributor
- Posts: 1335
- Joined: Sun Nov 20, 2005 2:42 pm
- Location: Columbus, Ohio
- x 3
- Contact: