This version (2017/05/27 13:44) is a draft.

[00:20:13] <ChicagoJohn> is there a simple way to 'wait' while all of your verticles are deployed?

[00:20:49] <ChicagoJohn> if i try to run tests against two verticles, i can't guarantee they are up before the code begins to test

[00:21:02] <temporal_> you can use an intermediate verticle that does that

[00:21:10] <temporal_> ah it will be the same I think

[00:21:16] <temporal_> try using CompositeFuture

[00:21:20] <temporal_> with all

[00:21:29] <ChicagoJohn> ok im kind of doing that already

[00:21:34] <ChicagoJohn> im using vertx-when

[00:21:34] <temporal_> make a list of futures that will resolve each deployment
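
(Editor's note: temporal_'s suggestion — one future per deployment, combined with `CompositeFuture.all` — might look like the following Vert.x 3.x sketch. The verticle class names are placeholders, not from the conversation.)

```java
import io.vertx.core.CompositeFuture;
import io.vertx.core.Future;
import io.vertx.core.Vertx;

public class Deployer {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // One future per verticle; deployVerticle resolves it with the deployment ID
    Future<String> login = Future.future();
    vertx.deployVerticle("com.example.LoginVerticle", login.completer());
    Future<String> api = Future.future();
    vertx.deployVerticle("com.example.ApiVerticle", api.completer());

    // all() succeeds only once every deployment has succeeded,
    // and fails as soon as any deployment fails
    CompositeFuture.all(login, api).setHandler(ar -> {
      if (ar.succeeded()) {
        System.out.println("all verticles deployed");
      } else {
        ar.cause().printStackTrace();
      }
    });
  }
}
```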

[00:21:50] <temporal_> if you use vertx unit you can use Async

[00:21:55] <temporal_> with a number

[00:22:08] <ChicagoJohn> when.all(verticles).then(deploymentIDs -> { context.asyncAssertSuccess(); });

[00:22:23] <temporal_> rather do

[00:23:00] <ChicagoJohn> should i use 'final Async async = context.async(2);' ?

[00:23:10] <temporal_> Async async = context.async(n);

[00:23:12] <temporal_> yes

[00:23:18] <temporal_> and then in each verticle

[00:23:19] <temporal_> you do

[00:23:34] <temporal_> async.complete()

[00:23:37] <temporal_> it's like a countdownlatch

[00:23:40] <ChicagoJohn> verticles.add(whenVertx.deployVerticle(LoginAPI.class.getName(), options, async.complete()));

[00:23:43] <ChicagoJohn> something like that

[00:23:45] <ChicagoJohn> ?

[00:23:57] <temporal_> yes

[00:24:02] <temporal_> you need to handle the Error though

[00:24:25] <temporal_> so something like

[00:24:27] <ChicagoJohn> ?

[00:24:37] <temporal_> if (error) { fail } else { async.complete() }

[00:24:50] <temporal_> there is no one magic silver bullet to all async cases

[00:25:22] <temporal_> ideally an Async object should perhaps have been a Vert.x Future

[00:25:34] <temporal_> will do that for next version of Vert.x Unit I think

[00:25:35] <temporal_> (and break all users :-) )

[00:25:36] <ChicagoJohn> so in the handler, make sure to check for deployment errors

[00:26:28] <ChicagoJohn> btw, i would suggest you have someone go over the vertx unit testing doc so it covers vertx-unit

[00:26:40] <temporal_> actually you should call

[00:26:42] <temporal_> countdown()

[00:26:44] <temporal_> and not complete

[00:26:56] <ChicagoJohn> you are right. complete strong arms the whole thing
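
(Editor's note: putting the thread's advice together — a counted `Async`, per-deployment error handling, and `countDown()` rather than `complete()` — a vertx-unit test might be sketched like this. `LoginAPI` and `OrderAPI` are placeholder verticle classes.)

```java
import io.vertx.core.AsyncResult;
import io.vertx.core.Handler;
import io.vertx.core.Vertx;
import io.vertx.ext.unit.Async;
import io.vertx.ext.unit.TestContext;
import org.junit.Test;

public class DeploymentTest {

  @Test
  public void deploysBothVerticles(TestContext context) {
    Vertx vertx = Vertx.vertx();
    // like a CountDownLatch: completes after two countDown() calls
    Async async = context.async(2);

    Handler<AsyncResult<String>> onDeploy = ar -> {
      if (ar.failed()) {
        context.fail(ar.cause());   // surface deployment errors
      } else {
        async.countDown();          // one verticle up
      }
    };

    vertx.deployVerticle(LoginAPI.class.getName(), onDeploy);
    vertx.deployVerticle(OrderAPI.class.getName(), onDeploy);
  }
}
```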

[00:26:58] <temporal_> ChicagoJohn what do you mean ?

[00:27:38] <ChicagoJohn> my mistake, i keep pulling up an old blog post: http://vertx.io/blog/unit-and-integration-tests/

[00:28:33] <ChicagoJohn> also hi, im an idiot. and i found the unit testing doc…. http://vertx.io/docs/vertx-unit/java/

[00:29:23] <ChicagoJohn> on an unrelated note, I FINALLY cracked the sequentially dependent rxjava http call issue ive been working on

[00:30:15] <ChicagoJohn> i had to wrap each vertx.createHttpClient() in an Observable.create()

[00:30:43] <ChicagoJohn> and then appropriately send the response through the subscriber's onNext()

[00:31:21] <ChicagoJohn> now i can easily pipe responses into future responses

[00:31:51] <ChicagoJohn> er…. request Y depends on response of X
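
(Editor's note: the approach ChicagoJohn describes — wrapping each HTTP call in an `Observable.create` and piping responses forward — might look like this RxJava 1.x sketch. The host, port, and paths are made up for illustration.)

```java
import io.vertx.core.Vertx;
import io.vertx.core.buffer.Buffer;
import io.vertx.core.http.HttpClient;
import rx.Observable;

public class ChainedCalls {

  static HttpClient client;

  // Wrap one HTTP GET in an Observable that emits the response body
  static Observable<Buffer> get(String path) {
    return Observable.create(subscriber ->
      client.getNow(8080, "localhost", path, resp ->
        resp.bodyHandler(body -> {
          subscriber.onNext(body);    // push the body downstream
          subscriber.onCompleted();
        })));
  }

  public static void main(String[] args) {
    client = Vertx.vertx().createHttpClient();

    // request Y depends on the response of X: flatMap chains them sequentially
    get("/x")
      .flatMap(xBody -> get("/y?id=" + xBody.toString()))
      .subscribe(
        yBody -> System.out.println("got " + yBody),
        Throwable::printStackTrace);
  }
}
```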

[00:41:19] <ChicagoJohn> when starting up multiple verticles, this is pretty standard right:

[00:41:20] <ChicagoJohn> vertx.createHttpServer().requestHandler(router::accept).listen(config().getInteger("http.port"), next::handle);

[00:41:38] <ChicagoJohn> where all verticles are listening on the same port. right?

[00:41:56] <ChicagoJohn> i could have sworn i had this code working last time i poked at it

[00:42:49] <ChicagoJohn> if i deploy 1 verticle, my endpoint gets hit. if i deploy 2 verticles with non overlapping endpoints, nothing gets hit…

[09:15:21] <whitenexx> hi

[10:07:48] <amr> web

[10:07:49] <amr> oops

[16:47:08] <AlexLehm> bintray tells me the new vertx version is out. Yeh!

[16:54:31] <tsegismont> :)

[18:34:02] <ChicagoJohn> does anyone have a good example of a 'main verticle' deployment?

[18:34:15] <ChicagoJohn> one that deploys multiple verticles and logs it?
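
(Editor's note: a 'main verticle' of the kind asked about might be sketched as below, for Vert.x 3.x. It also hints at the earlier port puzzle: when several verticles each `listen()` on the same port, Vert.x round-robins incoming connections among them, so a request can land on a router that doesn't know the endpoint — routing everything through one router avoids that. `LoginAPI` and `OrderAPI` are placeholder class names.)

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.CompositeFuture;
import io.vertx.core.Future;
import io.vertx.core.logging.Logger;
import io.vertx.core.logging.LoggerFactory;

public class MainVerticle extends AbstractVerticle {

  private static final Logger log = LoggerFactory.getLogger(MainVerticle.class);

  @Override
  public void start(Future<Void> startFuture) {
    // One future per child verticle, resolved with its deployment ID
    Future<String> login = Future.future();
    vertx.deployVerticle(LoginAPI.class.getName(), login.completer());
    Future<String> orders = Future.future();
    vertx.deployVerticle(OrderAPI.class.getName(), orders.completer());

    CompositeFuture.all(login, orders).setHandler(ar -> {
      if (ar.succeeded()) {
        log.info("deployed: " + ar.result().list());  // log the deployment IDs
        startFuture.complete();
      } else {
        startFuture.fail(ar.cause());
      }
    });
  }
}
```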

[18:43:32] <temporal_> AlexLehm yes, finally some rest ?

[18:47:20] <AlexLehm> for the project you mean?

[22:06:03] <temporal_> AlexLehm for me :-)

[22:06:37] <AlexLehm> well, you deserved it

[22:06:43] <AlexLehm> and earned it :-)