Adding objects. Drag and drop a queue, traffic source, traffic sink, or splitter from the left panel onto the canvas
Connecting. Drag one object and drop it over the destination object
Properties, Disconnect and Delete. Double-click the object
Contact us at firstname.lastname@example.org
This model is simple enough to determine the steady-state properties of the system analytically (in the following, ρ = λ / μ):
- Server utilization -- fraction of time the server is busy = λ / μ = ρ
- Average time spent by customers waiting in the queue = ρ / (μ - λ)
- Average time spent by customers in the system (time spent waiting in the queue + time spent being served) = 1 / (μ - λ)
- Average queue length = ρ² / (1 - ρ)
- Average number of customers in the system (being served or waiting in the queue) = ρ / (1 - ρ)
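These formulas are easy to evaluate directly. Here is a minimal sketch in Python (the function name `mm1_stats` is ours, not part of the simulator):

```python
def mm1_stats(lam, mu):
    """Closed-form steady-state metrics for an M/M/1 queue.

    lam: arrival rate (λ), mu: service rate (μ); requires lam < mu.
    """
    if lam >= mu:
        raise ValueError("queue is unstable unless λ < μ")
    rho = lam / mu
    return {
        "utilization": rho,                  # fraction of time server is busy
        "wait_in_queue": rho / (mu - lam),   # average time waiting in queue
        "time_in_system": 1 / (mu - lam),    # waiting + service time
        "queue_length": rho**2 / (1 - rho),  # average number waiting
        "in_system": rho / (1 - rho),        # average number in system
    }

# Example: λ = 1 customer/min, μ = 2 customers/min
stats = mm1_stats(1.0, 2.0)
```

For λ = 1 and μ = 2, the server is busy half the time and a customer spends, on average, one time unit in the system.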
Try out different values of λ and μ and see what happens!
Exercises: Think about what should happen before you try these. What will happen when the arrival rate is the same as the service rate (that is, λ = μ)? When the arrival rate is higher? Keeping the service rate constant, change the arrival rate so that the average queue length is about 5. Try again for a queue length of 50. What can we infer?
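The formulas above can also be cross-checked with a tiny hand-rolled simulation. This is only a sketch under standard M/M/1 assumptions, not the simulator's own code, and all names are ours:

```python
import random

def simulate_mm1_time_in_system(lam, mu, n=100_000, seed=42):
    """Estimate the average time a customer spends in an M/M/1 system
    by stepping through customers one at a time."""
    rng = random.Random(seed)
    arrival = 0.0        # arrival time of the current customer
    server_free = 0.0    # time at which the server next becomes idle
    total = 0.0
    for _ in range(n):
        arrival += rng.expovariate(lam)            # Poisson arrivals
        start = max(arrival, server_free)          # wait if server is busy
        server_free = start + rng.expovariate(mu)  # exponential service
        total += server_free - arrival             # time in system
    return total / n

# Theory says 1 / (μ - λ); for λ = 1, μ = 2 that is 1.0
estimate = simulate_mm1_time_in_system(1.0, 2.0)
```

The estimate should land close to the closed-form value, up to simulation noise.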
Life at the DMV: Alice is a fastidious clerk whose responsibility is to review forms and approve them. She takes exponential time to review each form, and she ends up rejecting a fraction of applications (because they were incomplete). The rejected customers immediately fix their applications and rejoin the queue at the end. We would like to know the average time taken for customers to get their applications approved, and how many customers are in the system.
Modeling: Alice is modeled as an M/M/1 service. The output of the service is fed into a splitter that loops a fraction of the traffic back to the queue, while the rest exits the system. The sink measures: (a) Population -- the number of customers within the system, and (b) Stay duration -- the average time that customers (who exited) spent within the system.
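This feedback model still has a closed form: if a fraction p of applications is rejected, each application passes through Alice 1/(1 - p) times on average, so the queue sees an effective arrival rate λ/(1 - p). A sketch of the standard Bernoulli-feedback analysis (helper names are ours):

```python
def mm1_feedback_stats(lam, mu, p):
    """Average population and stay duration for an M/M/1 queue where
    a fraction p of departures immediately rejoins the queue.

    lam: external arrival rate, mu: service rate, p: rejection fraction.
    """
    lam_eff = lam / (1 - p)          # effective arrival rate at the queue
    rho = lam_eff / mu
    if rho >= 1:
        raise ValueError("unstable: effective load must be below 1")
    population = rho / (1 - rho)     # average number in the system
    stay = population / lam          # Little's law on *external* arrivals
    return population, stay

# Alice: external rate 1, service rate 4, rejects 50% of applications
pop, stay = mm1_feedback_stats(1.0, 4.0, 0.5)
```

Note how a 50% rejection rate doubles the effective load on Alice even though the external arrival rate is unchanged.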
Exercises: What happens when Alice rejects 0%, 10%, 50%, or 90% of applications? Which is better for customers: Alice, who rejects 10% of applications, or Bob, who works twice as fast as Alice but rejects twice as many applications?
The joys of the government office: Alice was a good employee (see 'Feedback'); Chuck and Dave are not. They work in a government office and are responsible for approving loan applications. Customers line up at Chuck's desk first, but Chuck approves only some of them. He sends the rest over to Dave. Dave also approves only some of the customers, and sends the rest back to Chuck. We would like to know the average time taken for customers to get their applications approved, and how many customers are in the system.
Modeling: Both Chuck and Dave are represented as M/M/1 services. The output of each service is fed into a 'splitter', which loops a fraction of the traffic back to the other service, while the rest exits the system. The 'sink' records the statistics we are interested in.
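The total load on each desk can be worked out from the traffic equations of this small network of queues. A sketch, assuming external arrivals at rate λ join Chuck, Chuck forwards a fraction r1 to Dave, and Dave sends a fraction r2 back to Chuck (all names ours):

```python
def two_desk_rates(lam, r1, r2):
    """Solve the traffic equations
        lam1 = lam + r2 * lam2   (Chuck: external arrivals + returns from Dave)
        lam2 = r1 * lam1         (Dave: forwarded by Chuck)
    for the total arrival rate at each desk."""
    lam1 = lam / (1 - r1 * r2)
    lam2 = r1 * lam1
    return lam1, lam2

# λ = 1; Chuck forwards 70% to Dave, Dave sends 30% back to Chuck
lam_chuck, lam_dave = two_desk_rates(1.0, 0.7, 0.3)
```

Each desk's utilization is then its total rate divided by its service rate, and the per-queue M/M/1 formulas apply to each desk separately.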
Exercises: What difference should there be between Chuck rejecting 30% of applications while Dave rejects 70%, versus Chuck rejecting 70% while Dave rejects 30%? See if you can predict intuitively before running the simulation.
Real life example? We don't know.. it just looks pretty! Seriously though, this is a "network of queues" where the output of one M/M/1 service feeds the next service. This is an important problem in assembly-line production, where we would like to study the impact of one slow service on the overall throughput of the system.
In the above scenario, all services have identical rates. Try to vary them and see what happens.
- The location of the slow server? Say there are three servers with identical rates and one with a slower rate. Where would you put the slow server -- first? last? See if you can predict intuitively before you simulate all four combinations.
- Say the four servers have rates 1, 2, 3, and 4. How would you arrange them -- ascending? descending? See if you can predict intuitively before you run the simulations.
Life of the [un]happy programmer: Programmers write code; some of it fails (a.k.a. bugs) and they have to fix it. The good code is sent to testers, who reject problematic code and send it back to the programmers. The programmers work on code from three sources: original code coming from their managers, code that was rejected in the coding stage, and code that was rejected in the testing stage. We would like to know the rate at which code is shipped by our software firm.
Modeling: Both the programming and testing stages are represented by M/M/1 queues ('queue_1' and 'queue_2', resp.). Output from the programming service is fed to a splitter, which sends a fraction (the good code) to the testers and loops the rest back to the programmers. The testers, similarly, send a fraction of the traffic (the good code) for shipping (the sink) and the rest back to the programmers. The 'sink' records the statistics we are interested in.
Exercises: Do you agree that "it's better to find problems sooner than later"? Try these two cases: 10% of code rejected after queue_1 and 30% after queue_2, and vice versa.
Only the best for our customers: Acme corporation sells paper cups. Before shipping, Acme runs each paper cup through a four-stage Quality Assurance pipeline to make sure there are no leaks and that the cups have the correct size, color, and strength. In each stage, only the accepted cups are passed on to the next stage of testing. Testing takes exponential time and arrivals from the factory are Poisson. We would like to know how much time is spent in QA, and whether we can speed it up.
Modeling: Each of the four stages is represented as an M/M/1 server. All servers are connected to 'splitters', which reject a fraction of the output and send the rest to the next server or to shipping. The sinks tell us how much time was cumulatively spent after each testing stage.
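Because each splitter only thins the Poisson stream, the arrival rate at each stage -- and the final shipping rate -- follows from the defect fractions alone. A small sketch (the function name is ours):

```python
def pipeline_rates(lam, defect_rates):
    """Arrival rate at each QA stage, and the final shipping rate,
    when each stage discards its defect fraction of the stream."""
    rates = []
    rate = lam
    for d in defect_rates:
        rates.append(rate)   # rate entering this stage
        rate *= 1 - d        # survivors passed to the next stage
    return rates, rate       # per-stage inputs, shipped rate

# Factory output 10 cups/min; defect rates 10%, 20%, 30%, 40%
stage_rates, shipped = pipeline_rates(10.0, [0.1, 0.2, 0.3, 0.4])
```

Note that the shipped rate is the same for any ordering of the stages (the product of survival fractions is commutative); what the ordering changes is how much work reaches each stage, and hence the load and waiting time there.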
- Suppose the four stages have defect rates of 10%, 20%, 30%, and 40%. How would you organize these stages: ascending defect rates or descending defect rates? Why?
- Suppose one stage takes much longer than the other stages but produces far fewer defects. Where would you place this stage -- first or last? What if it also produced the highest defect rate? Did the simulation results match your intuition?
On a busy highway: Life is usual on the freeway: cars enter and cars leave. The freeway has four segments, each modeled as an M/M/1 queue (why? because we can!). At each segment, some cars enter and some leave. We would like to know the steady-state overall throughput and the time that cars spend on the freeway.
Exercises: Reason intuitively about what should happen before you try these cases: (a) one segment has very high input and output rates, (b) one segment is the "bottleneck" (has a very low service rate), (c) the freeway exit rate is the same as, lower than, or higher than the entrance rate.
And the trophy goes to __: We have a competition in progress and we are looking at the semi-final and final stages. Competitors arrive at the semi-final stages; some lose and drop out of the competition. The winners proceed to the final stage, where they come out as winners or runners-up. This 'pyramidal' model is different from most of the others, which were 'linear'. We would like to know how much time contestants spend in this competition.
Exercises: What will be the impact on the overall statistics if one semi-final stage takes a long time and has a higher rejection rate? What about the other three combinations of service rate and rejection rate?