So I'm also interviewing for the position of Partner Engineer at Google's Cloud business, in Google's Switzerland office. This was scheduled more than a month ago, and the earliest available date was January 15th, so I have been waiting a while. The recruiter contacted me in November about the position and I responded in early December, after which he went on holiday. I shot him some questions about what the process would be like, and earlier this month he told me what was involved: a troubleshooting scenario and some code debugging, in other words, some programming. Turns out what actually happened was that the interviewer, who was based in Poland, asked me an operations scenario and a systems design question. Apparently, according to the recruiter, only 1 out of 10 candidates pass the first round of phone interviews. Call me a masochist, but I'm still a sucker for interviews with tech companies, as they are much more challenging than at other companies. Amazingly, I think this is the 4th or 5th time I have interviewed with them. My first time was in 2007. I've come a very long way since.
Round 1
I had the interview scheduled at 9pm, which is pretty late. It was only supposed to last 45 minutes, but in the end it lasted 1.5 hours. The interviewer introduced himself and asked me about my background. He seemed friendly and was Polish. He was actually interviewing me from a hotel! I pretty much told him everything I did in Sweden and more recently Dubai, and it all came down to one word: cloud. Everything cloud related, so this includes migrating on-premise solutions to the cloud, building pipelines in the cloud, migrating pipelines, building microservices in the cloud, setting up monitoring, working on security aspects, upskilling team members, and so on and so forth. His first question was "Tell me about something that was challenging to you". I responded with the migration of one of our services, an on-premise PDF report generator, to AWS (it's fine to talk about AWS, even though they are Google). I told him about our initial idea, which was to use Lambda, but it turned out not to be the right solution, as Lambda had strict constraints. He asked how I would do it differently now, to which I responded that depending on the requirements and constraints, there are some things you can and can't do. I also mentioned some security aspects, like how we had to contact IT to open up ports so we could access services in AWS. After that he gave me a scenario: you are given an application like WordPress and a short period of time; what 3 things would you monitor? I realise now that I didn't ask him what a "short period of time" meant, but I did make some assumptions. I said business continuity is the most important thing of all, so the parent process (like the HTTP daemon) should be monitored all the time, and if it crashes, it should be restarted; this can be done with a cron job. CPU and disk space should also be monitored, and if a metric exceeds 80%, alerts should be sent to dev and/or operations.
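As a rough illustration (not something I wrote in the interview), the kind of check I had in mind could look like the sketch below. The process name and the 80% threshold are just assumptions for the example, and it leans on `pgrep` being available:

```python
import shutil
import subprocess

DISK_THRESHOLD = 0.80  # alert when disk usage exceeds 80%


def process_running(name="httpd"):
    """Return True if a process with exactly this name is alive (via pgrep)."""
    return subprocess.run(["pgrep", "-x", name],
                          capture_output=True).returncode == 0


def disk_usage_fraction(path="/"):
    """Fraction of the filesystem at `path` that is currently used."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total


def check(name="httpd", path="/"):
    """Collect alert messages; an empty list means all is well."""
    alerts = []
    if not process_running(name):
        alerts.append(f"{name} is down, restart it")
    if disk_usage_fraction(path) > DISK_THRESHOLD:
        alerts.append(f"disk usage on {path} is above {DISK_THRESHOLD:.0%}")
    return alerts
```

In a real setup you would run something like this from cron every minute, have the "down" branch restart the daemon, and wire the alerts into whatever pages dev/ops.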
He followed up with: OK, let's assume the 80% alert for disk space got hit; what should be done now to ensure a successful migration? I spoke about ensuring a good user experience: assuming this was a monolith, you would redirect the index page to a temporary maintenance page. Then, assuming you had a downtime window between say 1am and 2am, you would remove the hard drive and clone it (either a file-level data copy or a sector-by-sector clone) onto a larger hard drive. I said the new drive should have double the disk space, because if traffic consumed disk in a linear fashion, traffic would literally have to double again before hitting the same 80% threshold. He asked how I would verify that the migration worked; I said you have to run all the scripts again and also visit the website's domain as a sanity check. I also mentioned that with a data copy you probably don't have to resize the partition, but with a sector-by-sector clone you will have to resize it to use the extra space. He seemed happy with my answers, and proceeded to ask me about designing an application where end users are alerted on their phones when their friends are within 300m of them, and how this system would integrate with other systems, like Facebook. I'm a big fan of systems design questions now, since I work a lot with architecture. I said I would adopt an API-first approach: everything should communicate over REST endpoints using TLS. I made the assumption that all your friends are in Europe, so you can deploy the service on EC2 instances as part of an ECS cluster, spread across at least 2 availability zones in Europe and fronted by a load balancer. That covers the high-availability aspect. Why Europe? Because it gives users the best experience, i.e. the lowest latency.
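For the sanity-check part, a minimal stdlib-only version of "visit the website's domain and see if it answers" might look like this; the function name and HEAD-request choice are my own for the sketch:

```python
from urllib.request import Request, urlopen
from urllib.error import URLError


def site_is_up(url, timeout=5):
    """Return True if the site answers a HEAD request with a 2xx/3xx status.

    4xx/5xx responses raise HTTPError (a subclass of URLError), so they
    fall through to the except branch and count as "not up".
    """
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (URLError, OSError):
        return False
```

You would run this against the domain after bringing the site back up, alongside re-running the application's own scripts.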
I asked how often users would be alerted and assumed maybe once a day, because if each of a user's 100 friends also has 100 friends, a naive check fans out to 10,000 requests; you can optimise that, but it depends on how you design the API. For database storage, I said you can use NoSQL, which DynamoDB provides, and scale it according to the reads and writes you require. At this point he mentioned we were out of time, but asked if I had more time, as he wanted to hear more. I was only too happy to oblige. I continued with real-time communication to the user, which you can do with WebSockets. I also mentioned that the user has to grant permission for their geolocation information to be used. His last question was what the application should send to the backend with each request (in other words, which headers). I said first and foremost TLS should be used, and then things like user ID, security token, correlation ID, browser, IP address, geolocation, application version, and map version. He asked how I could be sure the information was up to date, and I realised: of course, you also need the timestamp! (He helpfully guided me along with that question.) All this information is needed for troubleshooting and for business reasons; you don't want to log anything unnecessary. I also noted that I was leaving data protection laws out of this consideration. His very last question was why I would pick EC2. I answered that EC2 is easier to reason about, since it's a virtual machine; serverless has restrictions and more of a learning curve. I also mentioned containers: if you have something containerised running as a microservice, you can easily move between serverless and VMs in the future. If you have a long-running task, for example, you can run it in a VM. It all depends on your metrics.
If you have a short-running task, or something that doesn't run that often, you can run it on Lambda or Fargate. That was it; I think he seemed satisfied with my answers. To be fair, I wasn't expecting this question, but since I'm fresh off the AWS interview, I still have all of this in my head.
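For completeness, the core of the "within 300m" check from the design question is just a great-circle distance comparison. A sketch using the haversine formula, with coordinates as (lat, lon) in degrees, could look like this:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))


def within_alert_radius(me, friend, radius_m=300):
    """True if two (lat, lon) positions are within the alert radius."""
    return haversine_m(*me, *friend) <= radius_m
```

In the actual system this comparison would sit behind the REST API, running over each user's friend list whenever fresh geolocation data arrives.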
So after an hour or so it was over, and he said I could ask him questions. In the end I asked about half an hour's worth, including about his background. He had been at Google for 2.5 years and was previously at Viacom and some other Polish companies. I had a good time talking to him, and hopefully he did as well; otherwise, why would you spend double the interview time with someone you dislike, right? At this point it was already 10:30pm and my brain was only half working.
Anyway, fingers crossed; I guess I'll hear back from them sometime soon. Still waiting to hear back from Amazon though. I wonder if I blew my chance there? Regardless, it was definitely a fun experience. The disk cloning thing I haven't done in a while; I'm glad I still have a rough idea of what to do, in case I ever need to do a disk migration.
Tomorrow I have a last-minute interview scheduled at HP. It's actually DXC, a spinoff. I had a look at their Glassdoor reviews and they didn't impress me whatsoever. But let's see anyway; I need the interview experience.