How to send a request with HTTPoison through a proxy in Docker

Using microservices isn’t easy for me, unfortunately :frowning_face:. I have a problem using an external API from my services. For example, I need to connect to the Google reCAPTCHA API, but I can’t, because my server is dockerized.

For example, please see these pictures:

[screenshot: the HTTPoison error output]

and

[screenshot: nginx_master is my container’s name, shown instead of my container’s IP]

Although I used a proxy, which is my Nginx container, I still get an error. How can I fix it? By the way, if I don’t give internet access to the Elixir HTML container, it won’t be able to connect to the Google reCAPTCHA API, but I need to keep that container isolated, without internet access, because there is an Nginx proxy in front of it.

And my second problem: how can I use an external API from my private microservices behind my API gateway? I don’t want to give internet access to the private containers, and I need to use a safe proxy.

Thank you


Uh, I don’t think nginx can work as a SOCKS proxy; at least I’ve never tried. Why can’t it access the internet directly, go out through a tunnel, or reverse-proxy reCAPTCHA or so?

EDIT: Apparently there is a SOCKS plugin for nginx. Did you compile it in? It looks like it might be a bit out of date, too…


I’m confused. If I don’t give internet access to the Elixir container, it can’t connect to the Google API. If I do give it access, might that be a security problem, since the container can’t be isolated? What is your suggestion, generally?

I don’t know why it behaves this way!

Well, if it needs internet access, then that has to be supplied somehow. A single outgoing port, with your outgoing connections bound to it (or a pool of them), seems perfectly sufficient, since no incoming connections would be allowed. Or, if you need an incoming connection too, then a single port for that, or whatever. :slight_smile:


If I understood what you said, I should create a firewall rule to filter this container’s internet access, shouldn’t I? If so, would you mind telling me how I can do this? I tried to create iptables rules to filter and limit traffic to just the Google API, but I couldn’t!

My Docker network is linked by internal domain names like http://elixir_api or http://elixir_html. How can I filter the internet access of, for example, http://elixir_html, so that it only sends to and receives from Google and no other domain?

Thanks

Docker can already do all of this fine: just expose the specific port you want to listen on for incoming connections (if any), and let it communicate out via a bridge network to the host. :slight_smile:

Do you actually want to restrict it to specific domains only? A reverse proxy would be safer there.
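If nginx is the only container with internet access, one way to do that is to reverse-proxy the siteverify endpoint through it. A minimal sketch, assuming the nginx_master container sits on the same Docker network as the Elixir container (the /recaptcha/ location prefix is my own invention, not something from this thread):

```nginx
# Forward /recaptcha/* to Google over TLS, so the isolated Elixir
# container can call http://nginx_master/recaptcha/api/siteverify
# instead of reaching the internet directly.
location /recaptcha/ {
    proxy_pass https://www.google.com/recaptcha/;
    proxy_set_header Host www.google.com;
    proxy_ssl_server_name on;  # send SNI in the upstream TLS handshake
}
```

With something like this in place, the Elixir container needs no internet access at all; only nginx_master does.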


Thanks. I set up rules in iptables, but I couldn’t find anything about what you told me. I can expose a port from Docker, but I can’t set this up in nginx. How can I do it? Would you mind giving me a reference or tutorial?

Well, first: what are your very specific requirements?

Do you need incoming connections, outgoing connections, or both? Do they need to be restricted to very specific sets of IPs, domains, data, etc.?


I don’t have much information, but I just need to filter my container’s internet access so that it can only reach Google, not any other site or IP. I need both incoming and outgoing connections, but only for the Google reCAPTCHA API. I want the traffic filtered to allow only Google’s IPs or the API domain.

Did I describe my need well?

That then begs the question: ‘what’ is making the remote API call? Is it your own controlled code, or some untrusted user code?


The user sends me a reCAPTCHA token that Google’s JS file creates, and then I send this token to the Google API.

The code that sends it:

  def google_post_recap(root_link, params) do
    # URI.encode_query/1 URL-encodes the secret and response values,
    # which plain string interpolation would not do safely.
    body = URI.encode_query(%{"secret" => params["secret"], "response" => params["response"]})

    HTTPoison.post(
      root_link,
      body,
      [
        {"Content-Type", "application/x-www-form-urlencoded"},
        {"Accept", "text/html"}
      ],
      [timeout: 50_000, recv_timeout: 50_000]
    )
  end

and

  # login sender
  def login_register_sender(conn, %{"email" => _email, "password" => _password} = info_of_login) do
    ip_os = TrangellHtmlSite.ip_os(conn)
    info_map = User.login_mapper(info_of_login)

    with {:ok, %HTTPoison.Response{status_code: 200, body: body_of_token}} <-
           TrangellHtmlSite.User.google_recap(info_of_login["g-recaptcha-response"]),
         %{"challenge_ts" => _time, "hostname" => _host, "success" => true} <- Jason.decode!(body_of_token),
         {:ok, %HTTPoison.Response{status_code: 200, body: _user_info}} <-
           User.login_sender(info_map, ip_os.ip, ip_os.os) do
      conn
      |> put_flash(:info, "A two-step activation code has been emailed to you. Please check your email and enter that code, along with your email address, in the form below.")
      |> redirect(to: "/verify-code")
    end
  end

  # google
  def google_recap(response) do
    TrangellHtmlSite.google_post_recap("https://www.google.com/recaptcha/api/siteverify", %{"secret" => "MY secret code", "response" => response}) 
    |> IO.inspect
  end

After those calls, Google sends me back JSON that I use like this:

{:ok,
 %HTTPoison.Response{
   body: "{\n  \"success\": true,\n  \"challenge_ts\": \"2018-10-31T20:38:43Z\",\n  \"hostname\": \"localhost\"\n}",
   headers: [
     {"Content-Type", "application/json; charset=utf-8"},
     {"Date", "Wed, 31 Oct 2018 20:40:01 GMT"},
     {"Expires", "Wed, 31 Oct 2018 20:40:01 GMT"},
     {"Cache-Control", "private, max-age=0"},
     {"X-Content-Type-Options", "nosniff"},
     {"X-XSS-Protection", "1; mode=block"},
     {"Server", "GSE"},
     {"Alt-Svc", "quic=\":443\"; ma=2592000; v=\"44,43,39,35\""},
     {"Accept-Ranges", "none"},
     {"Vary", "Accept-Encoding"},
     {"Transfer-Encoding", "chunked"}
   ],
   request_url: "https://www.google.com/recaptcha/api/siteverify",
   status_code: 200
 }}

That is all of my code. I think the user-sent token can be untrusted, but Google decides whether to verify it or not.

Oh, it’s just a simple API confirmation call. Yeah, just let it access the internet; hardcode the endpoint address and you are good.
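A sketch of that hardcoding idea, applied to the code posted above (the @recaptcha_url attribute, the :trangell_html_site app name, and the :recaptcha_secret config key are my own guesses, not from this thread):

```elixir
# Hypothetical: pin the only allowed outbound URL as a module attribute,
# so nothing else in the app can vary the destination, and read the
# secret from application config instead of hardcoding it.
@recaptcha_url "https://www.google.com/recaptcha/api/siteverify"

def google_recap(response) do
  secret = Application.get_env(:trangell_html_site, :recaptcha_secret)
  TrangellHtmlSite.google_post_recap(@recaptcha_url, %{"secret" => secret, "response" => response})
end
```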


Thanks, I just have one question: if my container generally has access to the internet, will I have a problem?
Answering this may help me make my project safer in general. It goes without saying that I have only one external API in my project. Could my users send malicious requests directly to my Elixir project, bypassing my nginx? Especially if I use an external API in my private API behind my API gateway? To be clear, this is just a question about avoiding bad practices as I improve my project.

By the way, I filtered this container by iptables, interface, and subnet:

for example:

sudo docker network create --internal mynetwork_html

sudo docker network inspect mynetwork_html | grep Subnet

"Subnet": "192.168.100.0/20"

sudo iptables -S | grep 192.168.100.0 | cut -d" " -f7 | tail -n1
br-11d6257150db

network.yml

container_html:
    networks:
      - html

networks:   
  html:
    external: true
    name: mynetwork_html

To test:

sudo docker exec -it container_html ash
ping 8.8.8.8
ping google.com

ping yahoo.com

Google’s IP ranges:

dig -t TXT _netblocks.google.com 
; <<>> DiG 9.10.3-P4-Debian <<>> -t TXT _netblocks.google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 42472
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;_netblocks.google.com.		IN	TXT

;; ANSWER SECTION:
_netblocks.google.com.	872	IN	TXT	"v=spf1 ip4:35.190.247.0/24 ip4:64.233.160.0/19 ip4:66.102.0.0/20 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:74.125.0.0/16 ip4:108.177.8.0/21 ip4:173.194.0.0/16 ip4:209.85.128.0/17 ip4:216.58.192.0/19 ip4:216.239.32.0/19 ~all"

;; Query time: 34 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Wed Oct 31 18:50:19 +0330 2018
;; MSG SIZE  rcvd: 286

and

vi mynetwork

# Masquerade outbound traffic from the internal subnet, but only towards
# Google's netblocks (plus 8.8.8.8 for DNS):
sudo iptables -t nat -A POSTROUTING -s 192.168.100.0/20 -d 35.190.247.0/24 ! -o br-11d6257150db -j MASQUERADE
sudo iptables -t nat -A POSTROUTING -s 192.168.100.0/20 -d 64.233.160.0/19 ! -o br-11d6257150db -j MASQUERADE
sudo iptables -t nat -A POSTROUTING -s 192.168.100.0/20 -d 66.102.0.0/20 ! -o br-11d6257150db -j MASQUERADE
sudo iptables -t nat -A POSTROUTING -s 192.168.100.0/20 -d 66.249.80.0/20 ! -o br-11d6257150db -j MASQUERADE
sudo iptables -t nat -A POSTROUTING -s 192.168.100.0/20 -d 72.14.192.0/18 ! -o br-11d6257150db -j MASQUERADE
sudo iptables -t nat -A POSTROUTING -s 192.168.100.0/20 -d 74.125.0.0/16 ! -o br-11d6257150db -j MASQUERADE
sudo iptables -t nat -A POSTROUTING -s 192.168.100.0/20 -d 108.177.8.0/21 ! -o br-11d6257150db -j MASQUERADE
sudo iptables -t nat -A POSTROUTING -s 192.168.100.0/20 -d 173.194.0.0/16 ! -o br-11d6257150db -j MASQUERADE
sudo iptables -t nat -A POSTROUTING -s 192.168.100.0/20 -d 209.85.128.0/17 ! -o br-11d6257150db -j MASQUERADE
sudo iptables -t nat -A POSTROUTING -s 192.168.100.0/20 -d 216.58.192.0/19 ! -o br-11d6257150db -j MASQUERADE
sudo iptables -t nat -A POSTROUTING -s 192.168.100.0/20 -d 216.239.32.0/19 ! -o br-11d6257150db -j MASQUERADE
sudo iptables -t nat -A POSTROUTING -s 192.168.100.0/20 -d 8.8.8.8 ! -o br-11d6257150db -j MASQUERADE
sudo iptables -t nat -A DOCKER -i br-11d6257150db -j RETURN

# Allow established/related traffic back in, and new outbound traffic from the bridge:
sudo iptables -A FORWARD -o br-11d6257150db -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -o br-11d6257150db -j DOCKER
sudo iptables -A FORWARD -i br-11d6257150db ! -o br-11d6257150db -j ACCEPT

# Remove Docker's default isolation DROP rules for this bridge...
sudo iptables -D DOCKER-ISOLATION-STAGE-1 ! -s 192.168.100.0/20 -o br-11d6257150db -j DROP
sudo iptables -D DOCKER-ISOLATION-STAGE-1 ! -d 192.168.100.0/20 -i br-11d6257150db -j DROP

# ...and re-add the standard two-stage isolation for traffic leaving the bridge:
sudo iptables -A DOCKER-ISOLATION-STAGE-1 -i br-11d6257150db ! -o br-11d6257150db -j DOCKER-ISOLATION-STAGE-2
sudo iptables -A DOCKER-ISOLATION-STAGE-2 -o br-11d6257150db -j DROP


chmod  u+x  mynetwork
./mynetwork

Depends on what you run in it.

That has nothing to do with this; outgoing and incoming connections are exceedingly different. Incoming connections have to be expose'd; outgoing ones do not.
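To illustrate the difference in compose terms (the service names here are hypothetical): only incoming ports need a ports entry; outgoing traffic needs no declaration at all.

```yaml
services:
  elixir_html:
    ports:
      - "4000:4000"   # incoming: must be published to be reachable
    # outgoing connections just go out via the bridge network

  private_worker:
    # no ports entry: nothing can connect in from outside,
    # but the container can still make outgoing connections
    networks:
      - default
```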
