This version (2017/05/27 13:44) is a draft.

[17:06:51] * ChanServ sets mode: +o purplefox

[18:04:09] <zerkz> Concerning PemTrustOptions, and adding trust to a HTTPS Client.

[18:04:18] <zerkz> You can have multiple certs/cert paths, right?

[18:05:06] <zerkz> not sure if it's a bug… but the client is only using the last cert on the list.

[18:05:25] <zerkz> i see 3.1.0 is out, i'll use that

[18:21:02] <zerkz> upgraded to 3.1.0, not looking like it's fixed. Is this intended or is the documentation just wrong? I'll log an issue.

[19:16:32] <zerkz> bug filed : , looks like the count variable was never incremented so the certs just replace each other.

[19:29:15] * ChanServ sets mode: +o temporalfox
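The failure mode zerkz describes (a count variable that is never incremented, so each new cert overwrites the previous one and only the last cert on the list ends up trusted) is the classic pattern sketched below. This is illustrative only, not Vert.x's actual PemTrustOptions code; the class and flag names are made up.

```java
// Illustrative sketch of the reported bug: when the index is never
// incremented, every added cert lands in slot 0 and replaces the
// previous one, so only the last cert added is effectively trusted.
class CertListSketch {
    private final String[] certs = new String[8];
    private int count;
    private final boolean incrementCount; // false reproduces the bug

    CertListSketch(boolean incrementCount) {
        this.incrementCount = incrementCount;
    }

    void add(String cert) {
        certs[count] = cert;
        if (incrementCount) {
            count++; // the missing increment in the reported bug
        }
    }

    int stored() {
        return incrementCount ? count : (certs[0] != null ? 1 : 0);
    }

    String first() {
        return certs[0];
    }
}
```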

[21:52:09] <jtruelove_> julien v are you in here today?

[22:35:10] <temporal_> jtruelove_ hi

[22:41:51] <jtruelove_> hey

[22:42:07] <jtruelove_> thought it might be easier to discuss the thread here

[22:42:50] <jtruelove_> so the way the SPI works: does it create one metrics-implementing SPI instance per JVM, or one per verticle instance?

[22:43:22] <jtruelove_> the original design of vertx-opentsdb was to minimize outbound connections to the opentsdb cluster

[22:43:43] <jtruelove_> seems like you are suggesting an SPI connection plus one per verticle instance

[22:44:31] <jtruelove_> the benefit of the event bus approach was that it made it easy to have one reporter etc..

[22:45:13] <temporal_> good idea

[22:45:28] <temporal_> there is one SPI per Vertx instance

[22:45:48] <temporal_> I agree the event bus allows you to decouple things

[22:46:29] <jtruelove_> so that is where the arch gets a tiny bit tricky then

[22:46:41] <temporal_> so actually a friend of mine

[22:46:49] <jtruelove_> so right now our services have a randomly selected initializer thread that deploys the vertx-opentsdb verticle

[22:47:02] <temporal_> is doing an implementation of the metrics SPI for Hawkular Metrics

[22:47:24] <temporal_> perhaps you want to have a look ?

[22:47:38] <temporal_> we discussed these kinds of things together

[22:47:39] <jtruelove_> yeah i can, here actually is the untested SPI integration

[22:47:54] <temporal_> and we had same concerns that you share

[22:47:59] <jtruelove_>

[22:48:34] <jtruelove_> because if i do a connection per SPI + i need one for general metrics reporting, on an 8 core server i now have 16 connections just to the same opentsdb cluster

[22:48:37] <jtruelove_> for one server

[22:48:41] <temporal_> as far as I remember, the way he does it is to batch metrics

[22:48:58] <jtruelove_> yeah if you look at that CR i'm batching in the SPI case

[22:49:20] <jtruelove_> to not be insanely noisy, i added batching support to vertx-opentsdb
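The batching being described (added to vertx-opentsdb so a report cycle sends one payload instead of one write per metric) can be sketched in plain Java. `MetricBatcher`, the batch size, and the sender callback are hypothetical names for illustration, not vertx-opentsdb's actual API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch: accumulate metric lines and flush them in batches,
// so the reporter is far less noisy than one send per metric.
class MetricBatcher {
    private final int batchSize;
    private final Consumer<List<String>> sender; // e.g. writes to the OpenTSDB socket
    private final List<String> pending = new ArrayList<>();

    MetricBatcher(int batchSize, Consumer<List<String>> sender) {
        this.batchSize = batchSize;
        this.sender = sender;
    }

    void add(String metricLine) {
        pending.add(metricLine);
        if (pending.size() >= batchSize) {
            flush();
        }
    }

    void flush() {
        if (!pending.isEmpty()) {
            sender.accept(new ArrayList<>(pending));
            pending.clear();
        }
    }
}
```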

[22:49:39] <temporal_> what is CR ?

[22:49:44] <temporal_>

[22:49:56] <jtruelove_> code review

[22:50:12] <jtruelove_> we use gerrit and you can upload patches etc..

[22:50:13] <temporal_> I see you have a very similar approach

[22:50:17] <jtruelove_> before you merge

[22:50:31] <temporal_> I'm wondering if we should not have a single project with hawkular and opentsdb to share code

[22:50:41] <temporal_> and also perhaps put the dropwizard things

[22:50:53] <jtruelove_> i don't know enough about hawkular, i've only looked at it briefly

[22:51:09] <temporal_> it's an HTTP based reporting server

[22:51:21] <temporal_> I am going to have a quick look at your SPI CR

[22:51:25] <jtruelove_> gotcha, opentsdb does HTTP and raw sockets, we use raw sockets

[22:51:39] <jtruelove_> it's not 100% done but conceptually the main bits are there

[22:52:15] <jtruelove_> where i ran into issues was figuring out how to inject the OpenTsDbOptions into the verticles as we directly extend AbstractVerticle

[22:52:57] <temporal_> I think in case of SPI we should not bother with verticle much

[22:53:00] <jtruelove_> looks like how julien works around this is by pushing server startup one more step down and manually loading the verticles at least in the case of vertx-shell

[22:53:20] <temporal_> julien is me

[22:53:28] <jtruelove_> ah :)

[22:53:45] <temporal_> I used the verticle in vertx-shell more to get a Vertx “close” event

[22:53:56] <temporal_> to clean up things related to a vertx instance

[22:54:06] <temporal_> in case of SPI you get a proper close from Vertx

[22:54:11] <temporal_> so it should not be needed

[22:54:11] <jtruelove_> ah so my issue is that i only deploy one vertx-opentsdb instance

[22:54:24] <jtruelove_> the listener that does the reporting

[22:54:35] <temporal_> perhaps you should have a design similar to current clients we have in vertx

[22:54:45] <temporal_> now we tend to make client without verticle

[22:54:55] <temporal_> but provide a service in front of client with verticle

[22:55:03] <temporal_> so this way it works without verticle

[22:55:04] <jtruelove_> that's fine, i'd still only want one event bus listener

[22:55:07] <temporal_> but can work with verticle

[22:55:27] <temporal_> this approach does not prevent bus listener

[22:55:51] <jtruelove_> well it's more the coordination bit of who starts the listener

[22:56:17] <jtruelove_> if SPI clients get wired up for each instance someone still needs to consume the metrics to report them

[22:57:26] <temporal_> yes

[22:57:34] <temporal_> but you can use in this case either multiple clients

[22:58:03] <temporal_> or expose a common client and manage the multiplexing in the client

[22:58:17] <jtruelove_> yeah the event bus gets you the decoupling, then you just need to ensure you only start one listener

[22:58:20] <temporal_> you mean each vertx instance ?

[22:58:23] <jtruelove_> yeah

[22:58:46] <temporal_> so in this case I would recommend you use your own vertx instance

[22:58:55] <temporal_> if you want this

[22:59:08] <temporal_> unless you cluster all the vertx bus in the same VM :-)

[22:59:17] <temporal_> just kissing

[22:59:20] <temporal_> kidding

[22:59:22] <temporal_> sorry :-)

[22:59:24] <jtruelove_> :)

[22:59:28] <temporal_> there is one thing I don't get in the CR

[22:59:44] <temporal_> ScheduledMetrics has collectMetrics abstract

[22:59:59] <temporal_> and TcpMetrics does not implement it

[23:00:07] <temporal_> ah yes it does

[23:00:09] <temporal_> sorry

[23:00:39] <jtruelove_> yeah it's kinda specific to each type just wanted to force it to be written

[23:00:39] <temporal_> funny thing is that tomorrow I'll spend 1/2 day with Thomas (the guy doing hawkular metrics)

[23:00:43] <temporal_> to work with him

[23:00:52] <temporal_> (we live in the same neighbourhood)

[23:00:57] <jtruelove_> ah cool

[23:01:02] <jtruelove_> where are you based?

[23:01:33] <jtruelove_> i'm west coast

[23:01:49] <temporal_> South coast of France :-)

[23:02:02] <jtruelove_> nice, i've been there once a long time ago

[23:02:12] <temporal_> your code looks good at first sight

[23:02:12] <jtruelove_> you have the rock beaches

[23:02:21] <temporal_> indeed

[23:02:22] <temporal_> Marseille

[23:02:28] <jtruelove_> yeah i need to test it :)

[23:02:30] <temporal_> so tell me what are your actual concerns with SPI ?

[23:02:33] <temporal_> and this

[23:02:41] <temporal_> so I can share tomorrow with Thomas

[23:02:45] <jtruelove_> i want 1 connection per service

[23:02:53] <temporal_> what is a service ?

[23:03:03] <jtruelove_> a jvm in essence

[23:03:12] <temporal_> ok

[23:03:23] <temporal_> so you want to share a single common client for all Vert.x SPI

[23:03:27] <jtruelove_> a service is a distinct server that fulfills some task

[23:04:02] <jtruelove_> yeah and it's okay if each vertx SPI instance in a 'vertx instance' of a verticle has a client

[23:04:09] <jtruelove_> but i'd want them all routing to one listener

[23:04:24] <jtruelove_> that then sends to opentsdb off a single connection

[23:04:38] <jtruelove_> as any given server doesn't need more than one outbound connection to opentsdb

[23:04:58] <temporal_> ok

[23:05:04] <temporal_> I'm looking at how reporting occurs now

[23:05:07] <jtruelove_> an example of a 'service' is say a shopping cart service, this service e.g. exposes a rest API for interacting with a cart

[23:05:13] <amr> so it seems mongoclient's updateWithOptions() creates an _id field with an ObjectId()

[23:05:16] <amr> whereas save does not

[23:05:19] <temporal_> so publisher goes on the event bus

[23:05:25] <jtruelove_> i run a vertx instance for each core

[23:05:30] <temporal_> I mean ScheduledMetrics send an event on the bus

[23:05:42] <temporal_> ah no

[23:05:43] <jtruelove_> so say a box has 8 cores, i have 8 vertx instances for the shopping cart verticle

[23:05:45] <temporal_> if (metrics.size() > 0) { publisher.sendMetricBatch(metrics); }

[23:06:11] <temporal_> ah yes it does use the bus

[23:06:44] <temporal_> in your case the bus does two things

[23:06:59] <temporal_> 1/ provide an indirection with the bus address

[23:07:12] <temporal_> 2/ change the vertx context

[23:07:31] <temporal_> it could provide round robin balancing but you want a single instance which means a single consumer

[23:07:54] <jtruelove_> that's right

[23:08:02] <temporal_> so if 1 + 2 is all you need I would rather drop the bus and use a singleton

[23:08:03] <jtruelove_> as you only need 1 publisher to opentsdb

[23:08:24] <jtruelove_> a static instance across vertx instances?

[23:08:27] <temporal_> it would simplify and worst case you could still go back to bus approach

[23:08:38] <jtruelove_> i've done that just feels a tad bit dirty

[23:08:49] <jtruelove_> obviously bus comes with its own overhead

[23:09:18] <temporal_> you can have a singleton that remains open while there are vertx instances

[23:09:24] <temporal_> and close it when all vertx are closed

[23:09:27] <jtruelove_> like the current lib works this is just adding SPI so we don't have to instrument all the rest apis manually

[23:09:52] <jtruelove_> hmm is this pattern somewhere or just write it myself?

[23:10:52] <temporal_> look at mongo

[23:10:53] <temporal_>

[23:11:11] <temporal_> it has

[23:11:12] <temporal_> static MongoClient createShared(Vertx vertx, JsonObject config, String dataSourceName)

[23:11:51] <temporal_> it uses a holder to share a common connection between clients

[23:11:59] <temporal_> also it provides a “service” approach

[23:12:01] <temporal_> with event bus

[23:12:08] <temporal_> that reuses this client
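The holder pattern being pointed at (the `MongoClient.createShared` idea) can be sketched without Vert.x: a static map of named holders, each ref-counted, so clients acquiring the same name share one underlying connection, and the last `release` tears it down. All names below (`SharedClientHolder`, `acquire`, `release`, the `connectionOpen` flag) are illustrative, not the actual vertx-mongo internals:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative ref-counted holder: clients sharing a name share one
// underlying connection; only the last release closes it. This is the
// shape of the answer to "how do you guarantee close does the ref
// counting correctly" - every acquire must be paired with a release.
class SharedClientHolder {
    private static final Map<String, SharedClientHolder> holders = new HashMap<>();

    private final String name;
    private int refCount;
    boolean connectionOpen; // stands in for the real OpenTSDB connection

    private SharedClientHolder(String name) {
        this.name = name;
        this.connectionOpen = true; // "open the connection" on first acquire
    }

    static synchronized SharedClientHolder acquire(String name) {
        SharedClientHolder h = holders.computeIfAbsent(name, SharedClientHolder::new);
        h.refCount++;
        return h;
    }

    static synchronized void release(SharedClientHolder h) {
        if (--h.refCount == 0) {
            h.connectionOpen = false; // tear down the real connection here
            holders.remove(h.name);
        }
    }
}
```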

[23:12:46] <temporal_> btw you could expose TsDb service as @ProxyGen

[23:14:14] <temporal_> and provide a nice service in front of it

[23:14:24] <temporal_> (now we even have javascript proxies :-) )

[23:15:33] <jtruelove_> yeah i haven't looked at that yet

[23:15:51] <jtruelove_> or messed with the gen stuff i'd like to just didn't know where to start

[23:16:01] <temporal_> I started a new doc for gen stuff

[23:16:09] <temporal_> both for lang implementors

[23:16:18] <temporal_> but soon I should write a polyglot API design guide

[23:16:27] <temporal_> to complement it

[23:25:25] <amr> i guess upsert uses mongodb to generate the _id, whereas .save() catches it before it hits the db

[23:27:54] <jtruelove_> are you going to java one?

[23:30:12] <temporal_> jtruelove_ I don't have the opportunity to go there

[23:33:30] <jtruelove_> bummer

[23:37:55] <temporal_> I've been 10 years ago though

[23:52:11] <jtruelove_> is anyone going from the vertx team?

[23:59:20] <jtruelove_> cool yeah i see how mongo is doing it with a shared map

[23:59:54] <jtruelove_> how do you guarantee that the close gets called to do the ref counting correctly?