[13:57:34] <ewittman> temporalfox, vertx question (not related to the reverse proxy project, but rather a different one I'm working on)

[13:57:45] <temporalfox> hi ewittman

[13:57:53] <ewittman> temporalfox, good afternoon :)

[13:57:56] <temporalfox> indeed

[13:58:01] <temporalfox> good morning then :-)

[13:58:05] <temporalfox> what is the question ?

[13:58:26] <ewittman> temporalfox, is there a way to route incoming http requests to specific instances of a verticle?

[13:59:11] <ewittman> temporalfox, for example, if the request is a REST call and included some sort of ID - requests related to that ID could always be routed to the same verticle instance

[13:59:46] <temporalfox> what do you mean by instance ?

[13:59:48] <temporalfox> instances of the same verticle class

[13:59:56] <ewittman> yes exactly

[14:00:11] <temporalfox> what would be the purpose ?

[14:00:20] <temporalfox> you want some stickiness of the state ?

[14:00:21] <ewittman> the idea is to eliminate the need for synchronization

[14:01:20] <temporalfox> so the verticle instance maintains some state that the client relies on ?

[14:01:38] <temporalfox> for that client

[14:01:47] <ewittman> not pinned to the client, no

[14:01:56] <ewittman> imagine a REST server that managed a number N of object Foo

[14:02:02] <ewittman> where Foo had some state that could be mutated

[14:02:19] <ewittman> And multiple clients could attempt to mutate any of the Foos

[14:02:42] <ewittman> With a single verticle instance, there's no need to synchronize anything (not the list of Foos, nor each individual Foo)

[14:02:54] <ewittman> But with multiple verticles, obviously we need to synch on both those things.

[14:03:00] <ewittman> Correct?

[14:03:05] <temporalfox> that's why usually you use sharedData

[14:03:10] <temporalfox> and vertx provides that
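
A minimal sketch of the shared data approach temporalfox mentions, assuming Vert.x 3.x; the map name "counters" and the key are illustrative only:

  import io.vertx.core.AbstractVerticle;
  import io.vertx.core.shareddata.LocalMap;

  public class SharedDataVerticle extends AbstractVerticle {
    @Override
    public void start() {
      // a LocalMap is visible to every verticle instance on the same Vert.x node
      LocalMap<String, Long> counters = vertx.sharedData().getLocalMap("counters");
      vertx.createHttpServer().requestHandler(req -> {
        // note: get-then-put is not atomic across instances; this only shows
        // where the shared state would live
        Long current = counters.get("some-limit");
        counters.put("some-limit", current == null ? 1L : current + 1);
        req.response().end();
      }).listen(8080);
    }
  }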

[14:03:22] <bgeorges> Hi temporalfox

[14:03:26] <temporalfox> hi bgeorges

[14:03:45] <bgeorges> How is life in retail sector temporalfox

[14:04:19] <temporalfox> bgeorges actually I don't know , you're pinging the wrong person I think :-)

[14:04:34] <temporalfox> if you have issues with synchronization between the Foos

[14:04:38] <bgeorges> temporalfox: Yes, I got confused :)

[14:04:46] <temporalfox> you can accumulate state in each foo

[14:04:51] <temporalfox> and periodically merge the Foos' state into a single Foo

[14:05:05] <temporalfox> if that's what you want to do

[14:05:14] <temporalfox> i.e you batch
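
A rough sketch of the accumulate-and-merge (batching) idea, purely illustrative; the event bus address "foo.merge" and the flush interval are made up:

  import io.vertx.core.AbstractVerticle;

  public class BatchingVerticle extends AbstractVerticle {
    private long localCount; // touched only by this instance's event loop, so no locking

    @Override
    public void start() {
      vertx.createHttpServer().requestHandler(req -> {
        localCount++; // accumulate state locally, per instance
        req.response().end();
      }).listen(8080);

      // periodically merge the local delta into a single aggregating verticle
      vertx.setPeriodic(1000, id -> {
        vertx.eventBus().send("foo.merge", localCount);
        localCount = 0;
      });
    }
  }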

[14:05:55] <ewittman> hm - i'm trying to imagine how that would work in this case

[14:05:57] <temporalfox> bgeorges n/p it happened to others in the past :-)

[14:06:20] <temporalfox> I don't know what your business data is, ewittman

[14:07:15] <ewittman> sure - i'm working on a rate limiting micro-service, which is basically just a REST server that manages N “rate limits” where a rate limit is a counter with a max value and a reset time

[14:07:22] <bgeorges> temporalfox: I can believe that :)

[14:07:51] <ewittman> so, temporalfox, lots of clients would be trying to increment multiple rate limits (counters) very often

[14:08:26] <ewittman> so lots of requests spread across multiple counters, all trying to increment them until the limit is reached

[14:08:49] <temporalfox> is it a hard limit ?

[14:08:54] <ewittman> yes

[14:09:04] <temporalfox> or can you tolerate a few more ?

[14:09:23] <temporalfox> I see, so you cannot do batch

[14:09:55] <ewittman> what I have now is a single-verticle implementation that simply manages the rates in memory and doesn't use any synchronization when doing the rate limit logic

[14:10:00] <temporalfox> it means that you need to synchronize indeed

[14:10:01] <ewittman> and it works great and performs very well

[14:10:07] <temporalfox> how do you synchronize at the moment ?

[14:10:11] <temporalfox> compare and swap ?

[14:10:25] <temporalfox> have you tried with AtomicLongFieldUpdater ?

[14:10:41] <ewittman> no synchronization at the moment - only using a single verticle so there's only ever one thread modifying the rates

[14:10:54] <temporalfox> ok

[14:10:55] <ewittman> there's very little logic so a single thread can process a lot of rates

[14:11:03] <ewittman> something like 10k per second right now
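
A minimal sketch of the single-verticle implementation described above, assuming a hypothetical RateLimit shape (count, max, reset time); since one event-loop thread handles every request, no synchronization is needed:

  import java.util.HashMap;
  import java.util.Map;
  import io.vertx.core.AbstractVerticle;

  public class RateLimiterVerticle extends AbstractVerticle {

    // hypothetical per-limit state
    static class RateLimit {
      long count;
      long max = 1000;                  // illustrative default limit
      long resetAt;
    }

    private final Map<String, RateLimit> limits = new HashMap<>();

    @Override
    public void start() {
      vertx.createHttpServer().requestHandler(req -> {
        String id = req.getParam("id"); // illustrative: limit id taken from the request
        RateLimit limit = limits.computeIfAbsent(id, k -> new RateLimit());
        long now = System.currentTimeMillis();
        if (now >= limit.resetAt) {     // window expired: reset the counter
          limit.count = 0;
          limit.resetAt = now + 60_000; // illustrative one-minute window
        }
        if (limit.count < limit.max) {
          limit.count++;
          req.response().setStatusCode(200).end();
        } else {
          req.response().setStatusCode(429).end();
        }
      }).listen(8080);
    }
  }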

[14:11:04] <temporalfox> if it's just a counter, there is an efficient class for that

[14:11:06] <temporalfox> in java 8

[14:11:10] <temporalfox> LongAdder

[14:11:28] <temporalfox> it's designed for this purpose I think

[14:11:36] <ewittman> yes - we are going to create an implementation using ConcurrentHashMap (for the collection of rate limits) and something like AtomicLong for the counter
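
A hedged sketch of the multi-threaded variant ewittman describes, using ConcurrentHashMap for the collection of rate limits and AtomicLong for each counter; class and method names are illustrative. LongAdder is faster for pure counting, but it cannot enforce a hard cap atomically, which is why a compare-and-set loop on AtomicLong is used here:

  import java.util.concurrent.ConcurrentHashMap;
  import java.util.concurrent.ConcurrentMap;
  import java.util.concurrent.atomic.AtomicLong;

  public class ConcurrentRateLimiter {

    private final ConcurrentMap<String, AtomicLong> counters = new ConcurrentHashMap<>();
    private final long max;

    public ConcurrentRateLimiter(long max) {
      this.max = max;
    }

    // returns true if the increment stayed within the limit; safe from any thread
    public boolean tryAcquire(String id) {
      AtomicLong counter = counters.computeIfAbsent(id, k -> new AtomicLong());
      while (true) {
        long current = counter.get();
        if (current >= max) {
          return false;                 // hard limit reached
        }
        if (counter.compareAndSet(current, current + 1)) {
          return true;                  // we won the race for this slot
        }
      }
    }
  }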

[14:12:12] <ewittman> but I was just thinking that if a rate limit could be “pinned” to a verticle instance, then we'd be back to not needing any synchronization at all

[14:12:27] <temporalfox> no you cannot do that

[14:12:27] <ewittman> if vertx had something for that then I thought we'd give BOTH impls a try and see which one was faster :)

[14:13:10] <ewittman> http request routing to verticle instances is done how? round robin?

[14:13:23] <temporalfox> it is per connection

[14:13:38] <ewittman> ah interesting

[14:13:53] <temporalfox> when a client connects, it is assigned a handler

[14:14:07] <temporalfox> so you can have multiple requests hit the same handler

[14:14:13] <temporalfox> if they use the same connection

[14:14:39] <ewittman> ok understood - so that definitely makes it hard/impossible to route per-request based on state in the request
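
For context, a small sketch of what the per-connection distribution looks like in practice (assuming Vert.x 3.x): deploying several instances of the same HTTP verticle on one port lets Vert.x hand each new connection to a single instance, so requests on a keep-alive connection keep hitting the same handler. The verticle class name is illustrative:

  import io.vertx.core.DeploymentOptions;
  import io.vertx.core.Vertx;

  public class Main {
    public static void main(String[] args) {
      Vertx vertx = Vertx.vertx();
      // all instances bind the same port; incoming connections are spread across them
      vertx.deployVerticle("com.example.RateLimiterVerticle",
          new DeploymentOptions().setInstances(8));
    }
  }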

[14:15:48] <ewittman> temporalfox, OK thanks! We'll see how the synch impl compares to the single-verticle impl (on an 8 core server)

[14:16:20] <temporalfox> the problem you will have does not actually depend on vertx

[14:16:27] <temporalfox> it's more a mechanical sympathy thing

[14:16:46] <temporalfox> and concurrency between cores

[14:17:17] <ewittman> ooo - not sure what that means… can you explain a bit?

[14:22:50] <temporalfox> I mean that your problem is to share state between threads

[14:22:55] <temporalfox> efficiently

[14:23:15] <temporalfox> and AtomicLong seems what you need to use

[14:23:29] <temporalfox> and you should check it works well enough for you

[14:23:57] <ewittman> right - we're absolutely going to do that and check the results

[14:24:48] <ewittman> I wouldn't even consider attempting something else, except that the reset logic (the rate limit resets to 0 at a particular moment in time) does require some additional thread coordination

[14:25:09] <ewittman> Although it's not a huge deal, and standard java 8 concurrency approaches will likely be just fine
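
One possible shape for the reset coordination ewittman mentions, offered only as an assumption about the design: the reset deadline is itself an AtomicLong, and a compare-and-set ensures a single thread performs the reset for each window. Increments that race with the reset right at the boundary may be dropped, which is usually tolerable for a window that starts fresh anyway:

  import java.util.concurrent.atomic.AtomicLong;

  public class ResettableCounter {

    private final AtomicLong count = new AtomicLong();
    private final AtomicLong resetAt;
    private final long windowMillis;

    public ResettableCounter(long windowMillis) {
      this.windowMillis = windowMillis;
      this.resetAt = new AtomicLong(System.currentTimeMillis() + windowMillis);
    }

    public long increment() {
      long now = System.currentTimeMillis();
      long deadline = resetAt.get();
      if (now >= deadline) {
        // only the thread that wins the CAS starts the new window and clears the count
        if (resetAt.compareAndSet(deadline, now + windowMillis)) {
          count.set(0);
        }
      }
      return count.incrementAndGet();
    }
  }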

[14:25:22] <ewittman> I was just looking for another approach to test :)

[14:27:41] <ewittman> temporalfox, thanks again - and in other news, I'll have a PR for you today on the reverse proxy project - you can tell me if any of my changes are useful :)

[14:28:05] <temporalfox> ewittman sure I'll do

[14:43:47] <ewittman> temporalfox, https://github.com/vietj/vertx-reverse-proxy/pull/1

[14:44:26] <temporalfox> thanks ewittman will have a look soon!

[14:44:44] <ewittman> temporalfox, no rush, really

[16:43:03] <Sticky_> does anyone know whether the callback on “put” for a cluster-wide AsyncMap is called when every cluster member has accepted the new value? Or is it something weaker, like the local map has been updated?
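
For reference, a sketch of the API being asked about, assuming Vert.x 3.x and a clustered Vert.x instance; the map name and key are illustrative. What the completion handler actually guarantees depends on the cluster manager (e.g. Hazelcast) rather than on Vert.x itself:

  import io.vertx.core.Vertx;
  import io.vertx.core.shareddata.AsyncMap;

  public class AsyncMapExample {
    public static void example(Vertx vertx) {
      vertx.sharedData().<String, Long>getClusterWideMap("limits", mapRes -> {
        if (mapRes.succeeded()) {
          AsyncMap<String, Long> map = mapRes.result();
          map.put("some-key", 1L, putRes -> {
            if (putRes.succeeded()) {
              // the put has completed from the cluster manager's perspective;
              // propagation semantics depend on the cluster manager configuration
            }
          });
        }
      });
    }
  }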

[18:37:36] <JordyBaylac> I have a question here https://groups.google.com/forum/?fromgroups#!topic/vertx-dev/BGJq0UV-OzY

[19:29:38] <temporalfox> hi, this is the wrong google group

[19:30:01] <temporalfox> he left already

[21:24:33] *** ChanServ sets mode: +o temporal_

[22:17:36] <AlexLehm> temporal_: it is so annoying that freenode doesn't have an offline msg function by default

[22:17:46] <temporal_> AlexLehm hi

[22:17:47] <temporal_> why ?

[22:17:51] <temporal_> there is vertx-dev for this

[22:18:13] <temporal_> or twitter

[22:18:36] <AlexLehm> i mean when somebody is in the channel and asks a question and leaves after 10 minutes, i would prefer to be able to send a msg

[22:18:57] <temporal_> ah yes

[22:19:02] <temporal_> good point

[22:32:14] <AlexLehm> there is an offline messaging function in freenode, but it only works for authed users that have activated it, so it's not helpful for users that are new

[22:32:40] <AlexLehm> otoh, when somebody only logs into irc once, they will not get the offline message anyway