Snippets Collections
To build a crypto wallet like Trust Wallet, you need to develop a secure, user-friendly mobile application that supports multiple cryptocurrencies, private key management, and seamless integration with decentralized apps (dApps). But starting from scratch can be time-consuming and expensive. A faster and more efficient way is to use a Trust Wallet clone script, a ready-made, customizable solution that replicates the core features of Trust Wallet.

For more info - https://www.alwin.io/trust-wallet-clone-script
916382359432

**VAR statement notes (FCS)**;
**variables are imputed left --> right**;
**order them from most complete to least complete**;
**everything to the left of a variable is included in its imputation model**;
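/* PROC MI below: NIMPUTE=50 imputations via FCS, using the discriminant-
   function method for CLASS variables and linear regression for continuous
   ones; SEED= fixes the random stream, OUT= stacks the imputed datasets. */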
Proc MI Data=analysis nimpute=50 out=project.analysisMI seed=07112022;
   Class HFRS_cat Charlson_cat Elixhauser_cat
         sexf rural_residence
         ACS_cat PCI CABG
         polypharmacy
         SES
         inhospital_death mortality30d episode_LOSgt10 discharge_inst
         outcome_death
         outcome_AKI outcome_readmit1y outcome_ACSreadmit1y outcome_revasc1y outcome_MajorBleed1y outcome_StrokeTIA1y outcome_MACE1y
         uptake1y_statins uptake1y_BB uptake1y_ACEi uptake1y_ARB uptake1y_P2Y12 outcome_GDMT1y
         hypoalbuminemia lab_anemia
         APPROACH_dyslipedimia smoking  BMIcat
         episode_HF episode_PulEdema episode_CardiogenicShock episode_CardiacArrest 
         Prior_MI elix_diab  elix_hyper
         elix_PVD elix_CHF baseline_AF elix_renal
         elix_liver charlson_COPD charlson_dementia elix_cancer
         PriorCath PriorPCI PriorCABG
         STsegment elevated_cardiac_markers
         Killip
         GRACE_cat
         ;
   fcs discrim( / ClassEffects=include details) reg( / details);
   var HFRS_cat Charlson_cat Elixhauser_cat
       age_admission sexf rural_residence
       ACS_cat PCI CABG
       polypharmacy
       episode_HF episode_PulEdema episode_CardiogenicShock episode_CardiacArrest 
       Prior_MI elix_diab  elix_hyper
       elix_PVD elix_CHF baseline_AF elix_renal
       elix_liver charlson_COPD charlson_dementia elix_cancer
       SES
       inhospital_death mortality30d episode_LOS episode_LOSgt10 discharge_inst
       outcome_death outcome_death_time
       outcome_AKI outcome_readmit1y outcome_ACSreadmit1y outcome_revasc1y outcome_MajorBleed1y outcome_StrokeTIA1y outcome_MACE1y
       uptake1y_statins uptake1y_BB uptake1y_ACEi uptake1y_ARB uptake1y_P2Y12 outcome_GDMT1y
       hypoalbuminemia lab_anemia
       APPROACH_dyslipedimia smoking  BMIcat
       PriorCath PriorPCI PriorCABG
       baseline_creatinine_mg_dL elevated_cardiac_markers STsegment
       PresysBP PreHR_bpm 
       Killip
       GRACE_cat
       ;
run;
{
	"blocks": [
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": " :x-connect: What's On in Brisbane! :x-connect:  ",
				"emoji": true
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": "Good morning Brisbane! Please see below for what's on this week."
			}
		},
		{
			"type": "divider"
		},
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": ":calendar-date-9: Monday, 23rd June",
				"emoji": true
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": "\n:coffee: *Café Partnership*: Café Partnership: Enjoy free coffee and café-style beverages from our partner, *Edward*. \n\n :lunch: *Lunch*: from *12pm* in the kitchen."
			}
		},
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": ":calendar-date-25: Wednesday, 25th June",
				"emoji": true
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": ":coffee: *Café Partnership*: Café Partnership: Enjoy coffee and café-style beverages from our partner, *Edward*. \n\n :late-cake: *Morning Tea*: from *10am* in the kitchen."
			}
		},
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": ":calendar-date-27: Friday, 27th June",
				"emoji": true
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": ":party: Social Happy Hour from 3.00pm: Wind down drinks and nibbles with your work pals."
			}
		},
		{
			"type": "divider"
		},
		{
			"type": "divider"
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": "Stay tuned to this channel for more details, check out the <https://calendar.google.com/calendar/u/0?cid=Y19uY2M4cDN1NDRsdTdhczE0MDhvYjZhNnRjb0Bncm91cC5jYWxlbmRhci5nb29nbGUuY29t|*Brisbane Social Calendar*>, and get ready to Boost your workdays!\n\nLove,\nWX Team :party-wx:"
			}
		}
	]
}
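These Block Kit payloads are posted to Slack like any other message. A minimal sketch of one way to do it, assuming the payload above is saved as boost_days.json, a bot token with chat:write scope in the SLACK_BOT_TOKEN environment variable, and the third-party requests package (the file name, channel, and token handling are all illustrative):

# Minimal sketch: post a Block Kit payload with Slack's chat.postMessage API.
# SLACK_BOT_TOKEN, boost_days.json, and the channel name are hypothetical.
import json
import os

import requests

with open("boost_days.json") as f:
    payload = json.load(f)

resp = requests.post(
    "https://slack.com/api/chat.postMessage",
    headers={"Authorization": f"Bearer {os.environ['SLACK_BOT_TOKEN']}"},
    json={"channel": "#brisbane-social", "blocks": payload["blocks"]},
)
resp.raise_for_status()
print(resp.json().get("ok"))  # Slack returns HTTP 200 with ok=false on API errors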
{
	"blocks": [
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": ":x-connect: Xero Boost Days! :x-connect:"
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": "Good morning Sydney! Please see below for what's on next week as we have a few changes to the Boost Programme. Also see the:thread: if you need to book a workstation on Level 1. "
			}
		},
		{
			"type": "divider"
		},
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": ":calendar-date-23: Monday, 23th June",
				"emoji": true
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": "\n:coffee: *Café Partnership*: Café Partnership: Enjoy free coffee and café-style beverages from our partner, *Naked Duck*. \n\n :breakfast: *Lunch*: from *12.00pm* in the All Hands Space."
			}
		},
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": ":calendar-date-26: Wednesday, 25th June",
				"emoji": true
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": ":coffee: *Café Partnership*: Café Partnership: Enjoy coffee and café-style beverages from our partner, *Naked Duck*.\n\n :breakfast: *Breakfast*: from *9.00am* in the All Hands Space."
			}
		},
		{
			"type": "divider"
		},
		{
			"type": "divider"
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": "Stay tuned to this channel for more details, check out the <https://calendar.google.com/calendar/u/0/r?cid=Y185aW90ZWV0cXBiMGZwMnJ0YmtrOXM2cGFiZ0Bncm91cC5jYWxlbmRhci5nb29nbGUuY29t|*Sydney Social Calendar*>, and get ready to Boost your workdays!\n\nLove,\nWX Team :party-wx:"
			}
		}
	]
}
{
	"blocks": [
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": ":x-connect: Boost Days - What's on for this week :x-connect:"
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": "\n\n Good morning Melbourne, hope you all had a wonderful weekend :smile: See below for what's in store this week as there is a change to our Boost Lunch Location: "
			}
		},
		{
			"type": "divider"
		},
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": "Xero Café :coffee:",
				"emoji": true
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": "\n :new-thing: *This week we are bringing back the yummy cookies and slices. * \n\n  :coffee: *Weekly Café Special:* _ Hazelnut Latte_"
			}
		},
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": " Wednesday, 25th June :calendar-date-25:",
				"emoji": true
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": " \n\n :lunch: *Italian themed Lunch*: Lunch is from *12pm* in the *Level-1 and Level-2 Kitchen* Lunch menu in:thread: \n \n :resilience-project: *Level 3 Breakout Space* :resilience-project: \n\n As part of our 2015 wellbeing initiative, *Belinda Tan & Sahar Zamanie* are excited to once again partner with *The Resilience Project* to Launch The *Authentic Connection Program*. This inspiring program explores how Vulnerability, imperfection, and passion can help us build deeper, more meaningful relationships with others and ourselves.\n\n  Grab your Lunch & Join us at 12.15pm, as there will be a special preview featuring a powerful 1 hour video presentation by Hugh Van Cuylenburg "
			}
		},
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": "Thursday, 26th June, :Calendar-date-26:",
				"emoji": true
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": ":eggs: *Breakfast*: from *8:30am-10:30am* in the Wominjeka Breakout Space.   \n\n :massage: Massage Services from 9.00am in the Wellbeing Room on Level-1. \n :pizza: Pizza and an Italian Disco playlist from DJ Anya   \n\n  "
			}
		},
		{
			"type": "divider"
		}
	]
}
A crypto exchange clone script is a software solution modeled on successful crypto exchanges like Binance, Coinbase, or Kraken. These scripts serve as pre-built templates containing the core functionalities needed for launching a trading platform.

Hivelance has earned its reputation as a premier crypto exchange clone script development company by delivering numerous high-quality blockchain projects. With a decade of experience and a deep understanding of market dynamics, we build platforms that combine performance, security, and innovation.



Know More:

Visit - https://www.hivelance.com/crypto-exchange-clone-script
WhatsApp - +918438595928, +971505249877
Telegram - Hivelance
Mail - sales@hivelance.com
.ginput_container_checkbox {
    .gchoice {
      margin-bottom: 1rem;

      @media (min-width: $md) {
        margin-bottom: 0;
      }

      label {
        position: relative;
        right: -4px;

        @media (min-width: $md) {
          right: -8px;
        }


        &::before {
          position: absolute;
          content: ' ';
          width: 16px;
          height: 16px;
          border: 1px solid $elgray;
          background: $lwhite;
          top: 3px;
          left: -20px;
          border-radius: 5px;

          @media (min-width: $md) {
            width: 20px;
            height: 20px;
            top: 3px;
            left: -26px;
          }
        }

        &::after {
          content: "";
          position: absolute;
          border-bottom: 2px solid $elgray;
          border-left: 2px solid $elgray;
          height: 6px;
          left: -16px;
          opacity: 0;
          top: 7px;
          transform: rotate(-45deg);
          transition: all .3s linear;
          width: 9px;

          @media (min-width: $md) {
            top: 9px;
            width: 10px;
            left: -21px;
          }
        }
      }

      input {
        opacity: 0;
      }

      input[type="checkbox"]:checked~label:after {
        opacity: 1 !important;
      }
    }
  }

  .ginput_container_radio {
    .gchoice {
      margin-bottom: 1rem;

      @media (min-width: $md) {
        margin-bottom: 0;
      }

      label {
        position: relative;
        right: -4px;

        @media (min-width: $md) {
          right: -8px;
        }

        &::before {
          position: absolute;
          content: ' ';
          width: 16px;
          height: 16px;
          border: 2px solid $primary;
          background: $white;
          top: 3px;
          left: -20px;
          border-radius: 50%;

          @media (min-width: $md) {
            width: 20px;
            height: 20px;
            top: 3px;
            left: -26px;
          }
        }

        &::after {
          content: "";
          position: absolute;
          width: 8px;
          height: 8px;
          left: -16px;
          opacity: 0;
          top: 7px;
          border-radius: 50%;
          background: $primary;
          transition: all .3s linear;

          @media (min-width: $md) {
            width: 10px;
            height: 10px;
            top: 8px;
            left: -21px;
          }
        }
      }

      input {
        opacity: 0;
      }

      input[type="radio"]:checked~label:after {
        opacity: 1 !important;
      }
    }
  }
SELECT count(*) AS tot_rows
, sum(IF((category IS NULL OR category = '' OR category = ' '), 1, 0)) AS null_category_rows 
, sum(IF((status IS NULL OR status = '' OR status = ' '), 1, 0)) AS null_status_rows 
, sum(IF((txn_id IS NULL OR txn_id = '' OR txn_id = ' '), 1, 0)) AS null_txn_id_rows 
FROM switch.txn_info_snapshot_v3 
WHERE dl_last_updated BETWEEN DATE'2025-05-01' AND DATE'2025-05-31';

SELECT count(*) AS tot_rows
, sum(IF((amount IS NULL), 1, 0)) AS null_amount_rows 
, sum(IF((participant_type IS NULL OR participant_type = '' OR participant_type = ' '), 1, 0)) AS null_participant_type_rows 
, sum(IF((scope_cust_id IS NULL AND participant_type = 'PAYER'), 1, 0)) AS null_scope_cust_id_rows 
, sum(IF((txn_id IS NULL OR txn_id = '' OR txn_id = ' '), 1, 0)) AS null_txn_id_rows 
, sum(IF((vpa IS NULL OR vpa = '' OR vpa = ' '), 1, 0)) AS null_vpa_rows 
FROM switch.txn_participants_snapshot_v3 
WHERE dl_last_updated BETWEEN DATE'2025-05-01' AND DATE'2025-05-31';

SELECT count(*) AS tot_rows 
, sum(IF((evaluationType IS NULL OR evaluationType = '' OR evaluationType = ' '), 1, 0)) AS null_evaluationType_rows 
, sum(IF((latitude IS NULL OR latitude = '' OR latitude = ' '), 1, 0)) AS null_latitude_rows 
, sum(IF((longitude IS NULL OR longitude = '' OR longitude = ' '), 1, 0)) AS null_longitude_rows 
, sum(IF((osVersion IS NULL OR osVersion = '' OR osVersion = ' '), 1, 0)) AS null_osVersion_rows 
, sum(IF((payeeType IS NULL OR payeeType = '' OR payeeType = ' '), 1, 0)) AS null_payeeType_rows 
, sum(IF((payeeVpa IS NULL OR payeeVpa = '' OR payeeVpa = ' '), 1, 0)) AS null_payeeVpa_rows 
, sum(IF((payerType IS NULL OR payerType = '' OR payerType = ' '), 1, 0)) AS null_payerType_rows 
, sum(IF((payerVpa IS NULL OR payerVpa = '' OR payerVpa = ' '), 1, 0)) AS null_payerVpa_rows 
, sum(IF((cst_risk_code IS NULL OR cst_risk_code = '' OR cst_risk_code = ' '), 1, 0)) AS null_cst_risk_code_rows 
, sum(IF((action_recommended IS NULL OR action_recommended = '' OR action_recommended = ' '), 1, 0)) AS null_action_recommended_rows  
FROM
(SELECT txnid
, lower(regexp_replace(cast(json_extract(request,	'$.evaluationType') as varchar), '"', '')) AS evaluationType
, lower(regexp_replace(cast(json_extract(request,	'$.requestPayload.latitude') as varchar), '"', '')) AS latitude
, lower(regexp_replace(cast(json_extract(request,	'$.requestPayload.longitude') as varchar), '"', '')) AS longitude
, lower(regexp_replace(cast(json_extract(request,	'$.requestPayload.osVersion') as varchar), '"', '')) AS osVersion
, lower(regexp_replace(cast(json_extract(request,	'$.requestPayload.payeeType') as varchar), '"', '')) AS payeeType
, lower(regexp_replace(cast(json_extract(request,	'$.requestPayload.payeeVpa') as varchar), '"', '')) AS payeeVpa
, lower(regexp_replace(cast(json_extract(request,	'$.requestPayload.payerType') as varchar), '"', '')) AS payerType
, lower(regexp_replace(cast(json_extract(request,	'$.requestPayload.payerVpa') as varchar), '"', '')) AS payerVpa
, regexp_replace(cast(json_extract(response, '$.messages.cst[0]') as varchar), '"', '') AS cst_risk_code
, json_extract_scalar(response, '$.action_recommended') AS action_recommended
FROM tpap_hss.upi_switchv2_dwh_risk_data_snapshot_v3 
WHERE dl_last_updated BETWEEN DATE'2025-05-01' AND DATE'2025-05-31');
SELECT * FROM attlog WHERE employeeID = 275;


UPDATE attlog
SET personName = 'Parveen Naik'
WHERE employeeID = 275;

# INPUT: NUMBER OF APPLES YOU WANT TO BUY
Apples = 7
# COST PER APPLE
Cost = 3
# THIS IS YOUR MONEY
Money = 50
MoneyNeeded = Apples * Cost

# NEW DEAL: SPEND OVER $20 AND WE TAKE $5 OFF
if MoneyNeeded > 20:
    MoneyNeeded = MoneyNeeded - 5

print("                                        ____________")
print("                                       |            |")
print("                                       | Apple Shop |")
print("                                       |____________|")
print()
print("                                         NEW DEAL:")
print("                                SPEND > $20 WE TAKE $5 OFF")
print("")
print("                                MoneyNeeded & Apples Bought")
print("                                      ---------------")
print("                                           $",MoneyNeeded)
print("                                        ", Apples, "Apples")

print("                                      ---------------")
print("")
if MoneyNeeded > Money:
    print("                                       Not Enough Money")
else:
    print("                                       Enough Money ")
// Remove tabs and sub-menu from ASE Pro for non superadmin
add_action('admin_head', function() {
    $current_user = wp_get_current_user();
    
    // Only add CSS if the current user is NOT 'mastaklance'
    if ($current_user->user_login !== 'mastaklance') {
        echo '<style>label[for="tab-content-management"], label[for="tab-admin-interface"], label[for="tab-login-logout"], label[for="tab-disable-components"], label[for="tab-security"], label[for="tab-optimizations"], .asenha-toggle.utilities.local-user-avatar, .asenha-toggle.utilities.multiple-user-roles, .asenha-toggle.utilities.image-sizes-panel, .asenha-toggle.utilities.view-admin-as-role, .asenha-toggle.utilities.enable-password-protection, .asenha-toggle.utilities.maintenance-mode, .asenha-toggle.utilities.redirect-404-to-homepage, .asenha-toggle.utilities.display-system-summary, .asenha-toggle.utilities.search-engine-visibility-status, .asenha-toggle.custom-code.enable-code-snippets-manager, .asenha-toggle.custom-code.enable-custom-admin-css, .asenha-toggle.custom-code.enable-custom-frontend-css, .asenha-toggle.custom-code.enable-custom-body-class, .asenha-toggle.custom-code.manage-robots-txt { display: none; }</style>';
    }
});
The lives of ordinary Vietnamese people are dirt cheap; there have been so many of these cases already, what's new? If you live in VN, you have to accept that you're just an ant, and if the rulers stomp you to death, you take it.

Funny how nobody is showing up right now to insist VN is a great place to live. For me, whether a country is worth living in comes down first to how much a human life is worth there. Remember the case where two ordinary men were pushed forward by a cop as human shields to stop a criminal? Total compensation: 8 million dong, so a Vietnamese life is apparently worth 4 million apiece. My life and my family's are worth far more than that, so I'm done living in VN, heh.

In VN, if the police beat you to death there is nobody to appeal to, like the case of Mỹ Hằng, who fell from the 9th floor of the Centana apartment building. Her husband, a policeman, beat her to death and then dropped her from the 9th floor, and a whole pack of them covered for each other; the body was so battered it was unrecognizable. Her elderly parents have carried signs pleading for justice everywhere, with no result. Let me ask you all: is it unfair to call VN a garbage country? How harshly would you have to curse this government for it to be enough? Where is heaven's justice? Where is karma?

So the red trolls and the flag-waving crowd should shut their mouths; they don't even deserve to be compared with animals. Animals at least show love for their own kind; these people are demons produced by an utterly brutal society. Honestly, it is sickening that such people exist.
-- Fastag_Trusted_VRN_CCDC_Weekly_Monthly_limitCheck

-- Enabling CC/DC on Fastag: weekly limit of 10K per VRN via CC/DC, and monthly limit of 20K per VRN via CC, on Fastag recharge using trusted sources
-- (generic form: weekly limit of X per VRN via CC/DC and monthly limit of Y per VRN via CC)

-- "if(!(Seq(""BALANCE"",""COUPON"").contains(payMethod)) && !(payMethod contains ""DIGITAL_CREDIT"") && !(Seq(""ONE_CLICK_PAY"").contains(oneclick))){trusted_payload = 1}

-- if(Seq(""CREDIT_CARD"",""DEBIT_CARD"").contains(pay_method) &&
--  paytm_merchant_id==""PTMFVT32998068120662"" &&
-- ((event_amount+sm_gmv_rn_tid_fastag_trust_success_7d)>2529500||
-- (event_amount+sm_gmv_rn_tid_fastag_trust_success_30d)>5059000))
-- ""BLOCK"""

-- DROP TABLE team_kingkong.onus_Fastag_Trusted_VRN_CCDC_Weekly_Monthly_limitCheck_breaches;
 
-- CREATE TABLE team_kingkong.onus_Fastag_Trusted_VRN_CCDC_Weekly_Monthly_limitCheck_breaches AS
INSERT INTO team_kingkong.onus_Fastag_Trusted_VRN_CCDC_Weekly_Monthly_limitCheck_breaches
with onus_txn_base as
    (SELECT DISTINCT A.*, case when m1.mid is not null then category else 'Others' end as business_category FROM 
        (select userid, transactionid,
        cast(eventAmount as double) / 100 as amt,
        dateinserted,
        substr(cast(dateinserted as varchar(30)), 1, 7) as mnth,
        paymethod, paytmmerchantid, responsestatus, actionrecommended, velocitytimestamp
        , subscriberid as vrn
        FROM cdp_risk_transform.maquette_flattened_onus_snapshot_v3
        WHERE DATE(dl_last_updated) BETWEEN DATE(DATE'2025-01-01' - INTERVAL '30' DAY) AND DATE'2025-01-31'
        AND SOURCE = 'PG'
        AND responsestatus IN ('SUCCESS') AND actionrecommended = 'PASS'
        AND paytmmerchantid IN ('PTMFVT32998068120662') AND paymethod IN ('DEBIT_CARD', 'CREDIT_CARD')
        AND eventid IN (SELECT eventlinkid
        FROM risk_maquette_data_async.pplus_payment_result_prod_async_snapshot_v3
        WHERE dl_last_updated BETWEEN DATE(DATE'2025-01-01' - INTERVAL '30' DAY) AND DATE'2025-01-31')) a
    left join
        (select * from team_kingkong.voc_mid_categorization where mid != '') m1
    on a.paytmmerchantid = m1.mid)
 
SELECT * FROM 
    (SELECT A.*
    , SUM(IF(DATE(B.dateinserted) BETWEEN DATE(DATE(A.dateinserted) - INTERVAL '7' DAY) AND DATE(A.dateinserted), B.amt, NULL)) AS week_amt
    , 25295 AS week_threshold
    -- Cumulative spend per VRN in the current calendar month (checked against the monthly threshold)
    , SUM(IF(DATE(B.dateinserted) BETWEEN date_trunc('month', DATE(A.dateinserted)) AND DATE(A.dateinserted), B.amt, NULL)) AS month_amt
    , 50590 AS month_threshold
    FROM
        (SELECT * FROM onus_txn_base
        WHERE DATE(dateinserted) BETWEEN DATE'2025-01-01' AND DATE'2025-01-31'
        )A
    INNER JOIN
        (SELECT * FROM onus_txn_base)B
    ON A.vrn = B.vrn AND A.transactionid <> B.transactionid AND B.velocitytimestamp < A.velocitytimestamp
    AND DATE(B.dateinserted) BETWEEN DATE(A.dateinserted - INTERVAL '30' DAY) AND DATE(A.dateinserted)
    GROUP BY 1,2,3,4,5,6,7,8,9,10,11,12)
WHERE ((amt + week_amt) >= week_threshold) OR ((amt + month_amt) >= month_threshold)
;
-- RISK235
-- if in previous 30 minutes distinct( lat,long)>=10 then block (Paytm specific)

-- CREATE TABLE team_kingkong.tpap_risk235_breaches AS
INSERT INTO team_kingkong.tpap_risk235_breaches
with tpap_base as
(
SELECT DISTINCT B.*, C.category
, IF(D.upi_subtype IS NOT NULL, D.upi_subtype, IF(C.category = 'LITE_MANDATE', 'UPI_LITE_MANDATE', '')) AS upi_subtype
, D.latitude, D.longitude
FROM
    (SELECT txn_id, scope_cust_id,
    MAX(CASE WHEN participant_type = 'PAYER' THEN vpa END) AS payer_vpa,
    MAX(CASE WHEN participant_type = 'PAYEE' THEN vpa END) AS payee_vpa,
    MAX(created_on) as txn_date,
    MAX(amount) AS txn_amount,
    created_on AS txn_time
    FROM switch.txn_participants_snapshot_v3
    WHERE DATE(dl_last_updated) BETWEEN DATE(DATE'2025-03-01' - INTERVAL '1' DAY) AND DATE'2025-03-31'
    AND DATE(created_on) BETWEEN DATE(DATE'2025-03-01' - INTERVAL '1' DAY) AND DATE'2025-03-31'
    AND vpa IS NOT NULL
    GROUP BY 1,2,7)B
inner join
    (select txn_id, category
    from switch.txn_info_snapshot_v3
    where DATE(dl_last_updated) BETWEEN DATE(DATE'2025-03-01' - INTERVAL '1' DAY) AND DATE'2025-03-31'
    and DATE(created_on) BETWEEN DATE(DATE'2025-03-01' - INTERVAL '1' DAY) AND DATE'2025-03-31'
    and upper(status) in ('SUCCESS')) C
on B.txn_id = C.txn_id
INNER JOIN
    (SELECT txnid
    , regexp_replace(cast(json_extract(request, '$.evaluationType') as varchar), '"', '') AS upi_subtype
    , regexp_replace(cast(json_extract(request, '$.requestPayload.latitude') as varchar), '"', '') as latitude
    , regexp_replace(cast(json_extract(request, '$.requestPayload.longitude') as varchar), '"', '') as longitude
    FROM tpap_hss.upi_switchv2_dwh_risk_data_snapshot_v3
    WHERE DATE(dl_last_updated) BETWEEN DATE(DATE'2025-03-01' - INTERVAL '1' DAY) AND DATE'2025-03-31'
    AND (lower(regexp_replace(cast(json_extract(request, '$.requestPayload.payerVpa') as varchar), '"', '')) LIKE '%@paytm%'
    or lower(regexp_replace(cast(json_extract(request, '$.requestPayload.payerVpa') as varchar), '"', '')) like '%@pt%')
    AND json_extract_scalar(response, '$.action_recommended') <> 'BLOCK'
    AND regexp_replace(cast(json_extract(request, '$.requestPayload.payerType') AS varchar),'"','') = 'PERSON'
    AND regexp_replace(cast(json_extract(request, '$.requestPayload.payeeType') AS varchar),'"','') = 'PERSON')D
ON B.txn_id = D.txnid
WHERE ((payer_vpa LIKE '%@paytm%') OR (payer_vpa LIKE '%@pt%'))
AND payee_vpa LIKE '%@%'
)
 
SELECT * FROM
    (SELECT t1.payer_vpa,
      t1.payee_vpa,
      t1.txn_id,
      t1.txn_amount,
      t1.category,
      t1.upi_subtype,
      t1.txn_time,
      t1.latitude,
      t1.longitude,
      DATE(t1.txn_time) AS txn_date,
      COUNT(DISTINCT CONCAT(t2.latitude, '_', t2.longitude)) AS distinct_lat_lon_count,
      10 AS lat_long_cnt_threshold
    FROM tpap_base t1
    INNER JOIN tpap_base t2
    ON t1.payee_vpa = t2.payee_vpa
      AND t2.txn_time BETWEEN (t1.txn_time - INTERVAL '1800' SECOND) AND t1.txn_time -- 30 MIN
      AND t1.txn_id <> t2.txn_id AND t1.txn_amount > 5000
      AND NOT (t1.latitude = t2.latitude AND t1.longitude = t2.longitude)
    GROUP BY t1.payer_vpa, t1.payee_vpa, t1.txn_id, t1.txn_amount, t1.category, t1.upi_subtype, t1.txn_time, DATE(t1.txn_time), t1.latitude, t1.longitude)
WHERE distinct_lat_lon_count >= lat_long_cnt_threshold
;
-- RISK236
-- if in previous 60 minutes distinct( lat,long)>=15 then block (Paytm specific)	
-- "val vpa1 = payerVpa.toLowerCase.replaceAll(""[^a-z0-9]"","""").trim
-- val vpa2 = payeeVpa.toLowerCase.replaceAll(""[^a-z0-9]"","""").trim
-- val keys_include = Set(""paytm"",""ptaxis"",""ptyes"",""pthdfc"",""ptsbi"")
-- if( 
--     (count > 15) && txnAmount>5000 && payerType == ""PERSON"" && payeeType == ""PERSON"" && 
--     ( keys_include.exists(vpa1.contains) || keys_include.exists(vpa2.contains) )
-- ){ ""BLOCK""}"

-- CREATE TABLE team_kingkong.tpap_risk236_breaches AS
INSERT INTO team_kingkong.tpap_risk236_breaches
with tpap_base as
(
SELECT DISTINCT B.*, C.category
, IF(D.upi_subtype IS NOT NULL, D.upi_subtype, IF(C.category = 'LITE_MANDATE', 'UPI_LITE_MANDATE', '')) AS upi_subtype
, D.latitude, D.longitude
FROM
    (SELECT txn_id, scope_cust_id,
    MAX(CASE WHEN participant_type = 'PAYER' THEN vpa END) AS payer_vpa,
    MAX(CASE WHEN participant_type = 'PAYEE' THEN vpa END) AS payee_vpa,
    MAX(created_on) as txn_date,
    MAX(amount) AS txn_amount,
    created_on AS txn_time
    FROM switch.txn_participants_snapshot_v3
    WHERE DATE(dl_last_updated) BETWEEN DATE(DATE'2025-04-01' - INTERVAL '1' DAY) AND DATE'2025-04-30'
    AND DATE(created_on) BETWEEN DATE(DATE'2025-04-01' - INTERVAL '1' DAY) AND DATE'2025-04-30'
    AND vpa IS NOT NULL
    GROUP BY 1,2,7)B
inner join
    (select txn_id, category
    from switch.txn_info_snapshot_v3
    where DATE(dl_last_updated) BETWEEN DATE(DATE'2025-04-01' - INTERVAL '1' DAY) AND DATE'2025-04-30'
    and DATE(created_on) BETWEEN DATE(DATE'2025-04-01' - INTERVAL '1' DAY) AND DATE'2025-04-30'
    and upper(status) in ('SUCCESS')) C
on B.txn_id = C.txn_id
INNER JOIN
    (
        SELECT txnid
    , regexp_replace(cast(json_extract(request, '$.evaluationType') as varchar), '"', '') AS upi_subtype
    , regexp_replace(cast(json_extract(request, '$.requestPayload.latitude') as varchar), '"', '') as latitude
    , regexp_replace(cast(json_extract(request, '$.requestPayload.longitude') as varchar), '"', '') as longitude
    FROM tpap_hss.upi_switchv2_dwh_risk_data_snapshot_v3
    WHERE DATE(dl_last_updated) BETWEEN DATE(DATE'2025-04-01' - INTERVAL '1' DAY) AND DATE'2025-04-30'
    AND (lower(regexp_replace(cast(json_extract(request, '$.requestPayload.payerVpa') as varchar), '"', '')) LIKE '%@paytm%'
    or lower(regexp_replace(cast(json_extract(request, '$.requestPayload.payerVpa') as varchar), '"', '')) like '%@pt%')
    AND json_extract_scalar(response, '$.action_recommended') <> 'BLOCK'
    AND regexp_replace(cast(json_extract(request, '$.requestPayload.payerType') AS varchar),'"','') = 'PERSON'
    AND regexp_replace(cast(json_extract(request, '$.requestPayload.payeeType') AS varchar),'"','') = 'PERSON')D
ON B.txn_id = D.txnid
WHERE ((payer_vpa LIKE '%@paytm%') OR (payer_vpa LIKE '%@pt%'))
AND payee_vpa LIKE '%@%'
)
 
SELECT * FROM
    (SELECT t1.payer_vpa,
      t1.payee_vpa,
      t1.txn_id,
      t1.txn_amount,
      t1.category,
      t1.upi_subtype,
      t1.txn_time,
      t1.latitude,
      t1.longitude,
      DATE(t1.txn_time) AS txn_date,
      COUNT(DISTINCT CONCAT(t2.latitude, '_', t2.longitude)) AS distinct_lat_lon_count,
      15 AS lat_long_cnt_threshold
    FROM tpap_base t1
    INNER JOIN tpap_base t2
    ON t1.payee_vpa = t2.payee_vpa
      AND t2.txn_time BETWEEN (t1.txn_time - INTERVAL '3600' SECOND) AND t1.txn_time -- 60 MIN
      AND t1.txn_id <> t2.txn_id AND t1.txn_amount > 5000
      AND NOT (t1.latitude = t2.latitude AND t1.longitude = t2.longitude)
    GROUP BY t1.payer_vpa, t1.payee_vpa, t1.txn_id, t1.txn_amount, t1.category, t1.upi_subtype, t1.txn_time, DATE(t1.txn_time), t1.latitude, t1.longitude)
WHERE distinct_lat_lon_count >= lat_long_cnt_threshold
;
-- RISK 318
-- CREATE TABLE team_kingkong.tpap_risk318_breaches AS
INSERT INTO team_kingkong.tpap_risk318_breaches
SELECT DISTINCT B.*, C.category
, IF(D.upi_subtype IS NOT NULL, D.upi_subtype, IF(C.category = 'LITE_MANDATE', 'UPI_LITE_MANDATE', '')) AS upi_subtype
, D.os, D.ios_version
FROM
    (SELECT txn_id, scope_cust_id,
    MAX(CASE WHEN participant_type = 'PAYER' THEN vpa END) AS payer_vpa,
    MAX(CASE WHEN participant_type = 'PAYEE' THEN vpa END) AS payee_vpa,
    MAX(created_on) as txn_date,
    MAX(amount) AS txn_amount,
    created_on AS txn_time
    FROM switch.txn_participants_snapshot_v3
    WHERE DATE(dl_last_updated) BETWEEN DATE(DATE'2025-01-01' - INTERVAL '1' DAY) AND DATE'2025-02-28'
    AND DATE(created_on) BETWEEN DATE(DATE'2025-01-01' - INTERVAL '1' DAY) AND DATE'2025-02-28'
    AND vpa IS NOT NULL
    GROUP BY 1,2,7)B
inner join
    (select txn_id, category
    from switch.txn_info_snapshot_v3
    where DATE(dl_last_updated) BETWEEN DATE(DATE'2025-01-01' - INTERVAL '1' DAY) AND DATE'2025-02-28'
    and DATE(created_on) BETWEEN DATE(DATE'2025-01-01' - INTERVAL '1' DAY) AND DATE'2025-02-28'
    and upper(status) in ('SUCCESS')) C
on B.txn_id = C.txn_id
INNER JOIN
    (SELECT txnid
    , regexp_replace(cast(json_extract(request, '$.evaluationType') as varchar), '"', '') AS upi_subtype
    , regexp_replace(cast(json_extract(request, '$.requestPayload.osVersion') as varchar), '"', '') AS os
    , SUBSTRING(REGEXP_REPLACE(CAST(JSON_EXTRACT(request, '$.requestPayload.osVersion') AS VARCHAR), '"', ''),4,3) as ios_version
    FROM tpap_hss.upi_switchv2_dwh_risk_data_snapshot_v3
    WHERE DATE(dl_last_updated) BETWEEN DATE(DATE'2025-01-01' - INTERVAL '1' DAY) AND DATE'2025-02-28'
    AND (lower(regexp_replace(cast(json_extract(request, '$.requestPayload.payerVpa') as varchar), '"', '')) LIKE '%@paytm%'
    or lower(regexp_replace(cast(json_extract(request, '$.requestPayload.payerVpa') as varchar), '"', '')) like '%@pt%')
    AND json_extract_scalar(response, '$.action_recommended') <> 'BLOCK'
    AND regexp_replace(cast(json_extract(request, '$.requestPayload.payerType') AS varchar),'"','') = 'PERSON'
    AND regexp_replace(cast(json_extract(request, '$.requestPayload.osVersion') as varchar), '"', '') LIKE 'iOS%'
    AND SUBSTRING(REGEXP_REPLACE(CAST(JSON_EXTRACT(request, '$.requestPayload.osVersion') AS VARCHAR), '"', ''),4,3) <> ''
    AND CAST(SUBSTRING(REGEXP_REPLACE(CAST(JSON_EXTRACT(request, '$.requestPayload.osVersion') AS VARCHAR), '"', ''),4,3) AS DOUBLE) < 17)D
ON B.txn_id = D.txnid
WHERE (payer_vpa LIKE '%@paytm%') OR (payer_vpa LIKE '%@pt%');
-- RISK 152
-- CREATE TABLE team_kingkong.tpap_risk152_breaches AS
INSERT INTO team_kingkong.tpap_risk152_breaches
with tpap_base as
(
SELECT DISTINCT B.*, C.category
, IF(D.upi_subtype IS NOT NULL, D.upi_subtype, IF(C.category = 'LITE_MANDATE', 'UPI_LITE_MANDATE', '')) AS upi_subtype
FROM
    (SELECT txn_id, scope_cust_id,
    MAX(CASE WHEN participant_type = 'PAYER' THEN vpa END) AS payer_vpa,
    MAX(CASE WHEN participant_type = 'PAYEE' THEN vpa END) AS payee_vpa,
    MAX(created_on) as txn_date,
    MAX(amount) AS txn_amount,
    created_on AS txn_time
    FROM switch.txn_participants_snapshot_v3
    WHERE DATE(dl_last_updated) BETWEEN DATE(DATE'2025-01-01' - INTERVAL '1' DAY) AND DATE'2025-01-31'
    AND DATE(created_on) BETWEEN DATE(DATE'2025-01-01' - INTERVAL '1' DAY) AND DATE'2025-01-31'
    AND vpa IS NOT NULL
    GROUP BY 1,2,7)B
inner join
    (select txn_id, category
    from switch.txn_info_snapshot_v3
    where DATE(dl_last_updated) BETWEEN DATE(DATE'2025-01-01' - INTERVAL '1' DAY) AND DATE'2025-01-31'
    and DATE(created_on) BETWEEN DATE(DATE'2025-01-01' - INTERVAL '1' DAY) AND DATE'2025-01-31'
    and upper(status) in ('SUCCESS')) C
on B.txn_id = C.txn_id
INNER JOIN
    (SELECT txnid
    , regexp_replace(cast(json_extract(request, '$.evaluationType') as varchar), '"', '') AS upi_subtype
    FROM tpap_hss.upi_switchv2_dwh_risk_data_snapshot_v3
    WHERE DATE(dl_last_updated) BETWEEN DATE(DATE'2025-01-01' - INTERVAL '1' DAY) AND DATE'2025-01-31'
    AND (lower(regexp_replace(cast(json_extract(request, '$.requestPayload.payerVpa') as varchar), '"', '')) LIKE '%@paytm%'
    or lower(regexp_replace(cast(json_extract(request, '$.requestPayload.payerVpa') as varchar), '"', '')) like '%@pt%')
    AND json_extract_scalar(response, '$.action_recommended') <> 'BLOCK'
    AND regexp_replace(cast(json_extract(request, '$.requestPayload.payerType') AS varchar),'"','') = 'PERSON'
    AND regexp_replace(cast(json_extract(request, '$.requestPayload.payeeType') AS varchar),'"','') = 'PERSON'
    AND lower(regexp_replace(cast(json_extract(request, '$.requestPayload.payerVpa') as varchar), '"', '')) <> 'jio@citibank')D
ON B.txn_id = D.txnid
WHERE ((payer_vpa LIKE '%@paytm%') OR (payer_vpa LIKE '%@pt%'))
AND payee_vpa LIKE '%@%'
)
 
SELECT * FROM
    (SELECT t1.payer_vpa,
      t1.payee_vpa,
      t1.txn_id,
      t1.txn_amount,
      t1.category,
      t1.upi_subtype,
      t1.txn_time,
      DATE(t1.txn_time) AS txn_date,
      COUNT(t2.txn_id) AS prior_txns_last_24h,
      70 as txn24hr_threshold,
      COUNT(DISTINCT IF(t1.payer_vpa <> t2.payer_vpa, t2.payer_vpa, NULL)) AS prior_payers_last_24h,
      50 AS payer24hr_threshold
    FROM tpap_base t1
    INNER JOIN tpap_base t2
    ON t1.payee_vpa = t2.payee_vpa
      AND t2.txn_time BETWEEN (t1.txn_time - INTERVAL '86400' SECOND) AND t1.txn_time -- 24 hrs
      AND t1.txn_id <> t2.txn_id
    GROUP BY t1.payer_vpa, t1.payee_vpa, t1.txn_id, t1.txn_amount, t1.category, t1.upi_subtype, t1.txn_time, DATE(t1.txn_time))
WHERE (prior_txns_last_24h >= txn24hr_threshold) AND (prior_payers_last_24h >= payer24hr_threshold)
;
{
	"blocks": [
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": ":x-connect: Boost Days - What's on for this week :x-connect:"
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": "\n\n Good morning Melbourne, hope you all had a wonderful long weekend :smile: See below for what's in store this week: "
			}
		},
		{
			"type": "divider"
		},
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": "Xero Café :coffee:",
				"emoji": true
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": "\n :new-thing: *This week we are bringing back the classic Old school slices. * \n\n :caramel-slice: Lemon, Mint, Caramel and Hedgehog \n\n :coffee: *Weekly Café Special:* _:rainbow: Rainbow Hot Chocolate_"
			}
		},
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": " Wednesday, 11th June :calendar-date-11:",
				"emoji": true
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": " \n\n :lunch: *Mexican themed Lunch*: Lunch is from *12pm* in the L3 Kitchen & Wominjeka Breakout Space. "
			}
		},
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": "Thursday, 12th June, :Calendar-date-12:",
				"emoji": true
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": ":eggs: *Breakfast*: from *8:30am-10:30am* in the Wominjeka Breakout Space. See menu in the:thread:  \n\n   \n\n "
			}
		},
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": "Friday, 13th June :calendar-date-13:",
				"emoji": true
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": "Social Happy Hour :rainbow-x: :rupaul: Don't miss out on our fabulous Happy Hour Collaboration from 4.00pm- 5.30pm. Drag Trivia with Ms Carmel Latte, party pies and plenty of sparkle :pink-heart:   "
			}
		},
		{
			"type": "divider"
		}
	]
}
<?php if (is_active_sidebar( 'footer-menu-services-widget-area' )) : ?>     
            <div class="grid-25 tablet-grid-50 mobile-grid-100">
                <ul class="sidebar footer-n-menu">
                    <?php dynamic_sidebar( 'footer-menu-services-widget-area' ); ?>
                </ul>
            </div>
            <?php endif;?>
            
            <?php if (is_active_sidebar( 'footer-menu-about-widget-area' )) :?>
            
            <div class="grid-15 tablet-grid-50 mobile-grid-100">
                <ul class="sidebar footer-n-menu">
                    <?php dynamic_sidebar( 'footer-menu-about-widget-area' ); ?>
                </ul>
            </div>
            
            <?php endif;?>
@app.route('/access_logs_data')
def access_logs_data():
    conn = None
    cursor = None
    try:
        conn = mysql.connector.connect(
            host=MYSQL_HOST,
            user=MYSQL_USER,
            password=MYSQL_PASSWORD,
            database=MYSQL_DATABASE
        )
        cursor = conn.cursor(dictionary=True)
        
        # Create access_logs table if it doesn't exist
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS access_logs (
                id INT AUTO_INCREMENT PRIMARY KEY,
                license_plate VARCHAR(255) NOT NULL,
                feed_type VARCHAR(50) NOT NULL,
                action VARCHAR(50) NOT NULL,
                timestamp DATETIME NOT NULL
            )
        ''')
        
        # Fetch all logs
        cursor.execute("SELECT * FROM access_logs ORDER BY timestamp DESC")
        logs = cursor.fetchall()
        
        # Process logs for all_time_stats
        entrances = [log for log in logs if log['feed_type'].lower() == 'entrance']
        exits = [log for log in logs if log['feed_type'].lower() == 'exit']
        granted = [log for log in logs if log['action'].lower() == 'auto']
        denied = [log for log in logs if log['action'].lower() != 'auto']
        
        # Get unique plates
        registered_plates = set(log['license_plate'] for log in granted)
        unregistered_plates = set(log['license_plate'] for log in denied)
        
        # Find peak hour
        hour_counts = Counter()
        for log in logs:
            timestamp = log['timestamp']
            if hasattr(timestamp, 'hour'):
                hour = timestamp.hour
            else:
                # Handle string timestamps if needed
                try:
                    hour = datetime.fromisoformat(str(timestamp)).hour
                except (ValueError, TypeError):
                    hour = 0
            hour_counts[hour] += 1
        
        peak_hour = max(hour_counts.items(), key=lambda x: x[1])[0] if hour_counts else 0
        
        # Calculate average daily traffic
        if logs:
            # Get unique dates from logs
            dates = set()
            for log in logs:
                timestamp = log['timestamp']
                if hasattr(timestamp, 'date'):
                    dates.add(timestamp.date())
                else:
                    try:
                        dates.add(datetime.fromisoformat(str(timestamp)).date())
                    except (ValueError, TypeError):
                        pass
            
            avg_traffic = round(len(logs) / max(1, len(dates)))
        else:
            avg_traffic = 0
        
        # Create all_time_stats dictionary
        all_time_stats = {
            'total_entrances': len(entrances),
            'total_exits': len(exits),
            'granted_access': len(granted),
            'denied_access': len(denied),
            'registered_vehicles': len(registered_plates),
            'unregistered_vehicles': len(unregistered_plates),
            'peak_hour': f"{peak_hour:02d}:00",
            'avg_traffic': avg_traffic
        }
        
        # Process data for charts (daily, weekly, monthly)
        now = datetime.now()
        
        # Create reportData structure
        report_data = {
            'day': process_period_data(logs, now, 'day'),
            'week': process_period_data(logs, now, 'week'),
            'month': process_period_data(logs, now, 'month')
        }
        
        return jsonify({
            'all_time_stats': all_time_stats,
            'report_data': report_data
        })
    
    except mysql.connector.Error as err:
        logging.error(f"MySQL Error fetching reports data: {err}")
        return jsonify({'error': 'Error fetching reports data'}), 500
    finally:
        if cursor:
            cursor.close()
        if conn and conn.is_connected():
            conn.close()
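The route above relies on a process_period_data helper that isn't included in this snippet. A minimal sketch of what it might look like, assuming it buckets entrance/exit counts into chart-ready labels for the requested period (the bucketing scheme and return shape are assumptions, not the original implementation):

# Hypothetical sketch of process_period_data; the real helper is not shown in
# this snippet. Assumes each log dict has 'timestamp' and 'feed_type' keys,
# as in access_logs_data above.
from collections import Counter
from datetime import datetime, timedelta

def process_period_data(logs, now, period):
    # Window and label format per period (assumed convention).
    if period == 'day':
        start, fmt = now - timedelta(days=1), '%H:00'
    elif period == 'week':
        start, fmt = now - timedelta(weeks=1), '%a'
    else:  # 'month'
        start, fmt = now - timedelta(days=30), '%d %b'

    entrances, exits = Counter(), Counter()
    for log in logs:
        ts = log['timestamp']
        if not hasattr(ts, 'strftime'):
            try:
                ts = datetime.fromisoformat(str(ts))
            except (ValueError, TypeError):
                continue
        if ts < start:
            continue
        bucket = ts.strftime(fmt)
        if log['feed_type'].lower() == 'entrance':
            entrances[bucket] += 1
        else:
            exits[bucket] += 1

    labels = sorted(set(entrances) | set(exits))
    return {
        'labels': labels,
        'entrances': [entrances[l] for l in labels],
        'exits': [exits[l] for l in labels],
    }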
def save_vehicle_owner(license_plate, owner_name, owner_contact, owner_address):
    conn = None
    cursor = None
    try:
        conn = mysql.connector.connect(
            host=MYSQL_HOST,
            user=MYSQL_USER,
            password=MYSQL_PASSWORD,
            database=MYSQL_DATABASE
        )
        cursor = conn.cursor()
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS avbs (
                license_plate VARCHAR(255) PRIMARY KEY,
                owner_name VARCHAR(255) NOT NULL,
                owner_contact VARCHAR(255),
                owner_address TEXT,
                registration_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
            )
        ''')
        sql = "INSERT INTO avbs (license_plate, owner_name, owner_contact, owner_address) VALUES (%s, %s, %s, %s) ON DUPLICATE KEY UPDATE owner_name=%s, owner_contact=%s, owner_address=%s, registration_timestamp=CURRENT_TIMESTAMP"
        val = (license_plate, owner_name, owner_contact, owner_address, owner_name, owner_contact, owner_address)
        cursor.execute(sql, val)
        conn.commit()
        logging.info(f"Saved/Updated owner details for license plate: {license_plate}")
        return True
    except mysql.connector.Error as err:
        logging.error(f"MySQL Error saving owner details: {err}")
        return False
    finally:
        if cursor:
            cursor.close()
        if conn and conn.is_connected():
            conn.close()
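A quick usage example for save_vehicle_owner (all values illustrative):

# Illustrative call; the plate and owner details are made up.
if save_vehicle_owner("ABC-1234", "Jane Doe", "+63 912 345 6789", "123 Mabini St."):
    print("Owner record saved")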

CAMERA_CONFIG = {
    'entrance': 0,  # First USB camera index for entrance
    'exit': 1,      # Second USB camera index for exit
    'single_camera_mode': False  # Set to False to use two separate cameras
}

# MySQL configuration
MYSQL_HOST = 'localhost'
MYSQL_USER = 'root'
MYSQL_PASSWORD = ''  
MYSQL_DATABASE = 'avbs' 

# Arduino configuration
ARDUINO_PORT = 'COM5'  # Change this to match your Arduino's COM port
ARDUINO_BAUD_RATE = 9600
arduino_connected = False
arduino_serial = None

YOLO_CONF_THRESHOLD = 0.25  # Confidence threshold for YOLO detection
PADDLE_OCR_CONF_THRESHOLD = 0.65  # Confidence threshold for OCR
SAVE_INTERVAL_SECONDS = 60  # Interval for saving JSON data
JSON_OUTPUT_DIR = "output_json"  # Directory for JSON output
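The Arduino values above are configuration only; the code that opens the port isn't shown in this snippet. A minimal sketch of how arduino_serial and arduino_connected might be initialized from it, assuming the pyserial package:

# Hypothetical initialization using pyserial; not part of the original snippet.
import logging

import serial  # pyserial

try:
    arduino_serial = serial.Serial(ARDUINO_PORT, ARDUINO_BAUD_RATE, timeout=1)
    arduino_connected = True
except serial.SerialException as err:
    logging.error(f"Could not open {ARDUINO_PORT}: {err}")
    arduino_serial, arduino_connected = None, False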
{
	"blocks": [
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": ":xeros-connect: Boost Days - What's on this week! :xeros-connect:"
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": "Mōrena Ahuriri :wave: Happy Monday, let's get ready to dive into another week with our Xeros Connect Boost Day programme! See below for what's in store :eyes:"
			}
		},
		{
			"type": "divider"
		},
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": ":calendar-date-11: Wednesday, 11th June :camel:",
				"emoji": true
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": "\n:coffee: *Café Partnership*: Enjoy coffee and café-style beverages from our cafe partner, *Adoro*, located in our office building *8:00AM - 11:30AM*.\n:muffin: *Breakfast*: Provided by *Design Cuisine* from *9:30AM-10:30AM* in the Kitchen."
			}
		},
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": ":calendar-date-12: Thursday, 12th June :duck:",
				"emoji": true
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": "\n:coffee: *Café Partnership*: Enjoy coffee and café-style beverages from our cafe partner, *Adoro*, located in our office building *8:00AM - 11:30AM*.\n:sandwich: *Lunch*: Provided by *Roam* from *12:30PM-1:30PM* in the Kitchen."
			}
		},
		{
			"type": "divider"
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": "*What else?* :party: \nWhat would you like from our future socials? \nMore food, drinks, or entertainment? \nWe'd love to hear feedback and ideas from you! \nDM your local WX coordinator or leave any suggestions in the thread :comment: \n*Keep up with us* :eyes: \nStay tuned to this channel for more details, check out the <https://calendar.google.com/calendar/u/0?cid=eGVyby5jb21fbXRhc2ZucThjaTl1b3BpY284dXN0OWlhdDRAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ|*Hawkes Bay Social Calendar*>, and get ready to Boost your workdays!\n\nWX Team :party-wx:"
			}
		}
	]
}
# Load necessary libraries
library(tidyverse)
library(tidytext)
library(lubridate)

# Sample text data with dates
feedback <- data.frame(
  text = c("I love this product!", "Terrible service.", "Okay experience.",
           "Wonderful!", "Worst support ever."),
  date = as.Date(c("2024-01-10", "2024-01-12", "2024-01-15", "2024-01-18", "2024-01-20"))
)

# Tokenize, clean, and assign sentiment
data("stop_words")
sentiment_data <- feedback %>%
  unnest_tokens(word, text) %>%
  anti_join(stop_words, by = "word") %>%
  inner_join(get_sentiments("bing"), by = "word") %>%
  count(date, sentiment) %>%
  pivot_wider(names_from = sentiment, values_from = n, values_fill = 0) %>%
  mutate(score = positive - negative,
         sentiment_label = case_when(
           score > 0 ~ "Positive",
           score < 0 ~ "Negative",
           TRUE ~ "Neutral"
         ))

# Trend visualization (bar plot over time)
ggplot(sentiment_data, aes(x = date, y = score, fill = sentiment_label)) +
  geom_col() +
  scale_fill_manual(values = c("Positive" = "green", "Negative" = "red", "Neutral" = "gray")) +
  labs(title = "Sentiment Trend Over Time", x = "Date", y = "Sentiment Score") +
  theme_minimal()

# Distribution visualization (pie chart)
ggplot(sentiment_data, aes(x = "", fill = sentiment_label)) +
  geom_bar(width = 1) +
  coord_polar("y") +
  theme_void() +
  labs(title = "Overall Sentiment Distribution")
# Apriori Algorithm in R

# Install and load required package
install.packages("arules")
library(arules)

# Load built-in transaction data
data("Groceries")

# Apply Apriori algorithm to find frequent itemsets
frequent_items <- apriori(Groceries, parameter = list(supp = 0.01, target = "frequent itemsets"))

# Generate association rules
rules <- apriori(Groceries, parameter = list(supp = 0.01, confidence = 0.5))

# Sort rules by lift
sorted_rules <- sort(rules, by = "lift", decreasing = TRUE)

# View top results
inspect(head(frequent_items, 10))
inspect(head(sorted_rules, 10))
#logistic Regression
# Install the 'caret' package (only run once; comment out if already installed)
install.packages("caret")

# Load the 'caret' package for machine learning utilities
library(caret)

# Load the built-in iris dataset
data(iris)

# Convert the problem into binary classification:
# Setosa (1) vs Non-Setosa (0)
iris$Label <- ifelse(iris$Species == "setosa", 1, 0)

# Remove the original Species column as it's no longer needed
iris <- iris[, -5]

# Set seed for reproducibility
set.seed(123)

# Split the data: 80% for training and 20% for testing
idx <- createDataPartition(iris$Label, p = 0.8, list = FALSE)
train <- iris[idx, ]   # Training set
test <- iris[-idx, ]   # Test set

# Train a logistic regression model using the training data
model <- glm(Label ~ ., data = train, family = "binomial")

# Predict probabilities on the test set and convert to class labels (1 or 0)
pred <- ifelse(predict(model, test, type = "response") > 0.5, 1, 0)

# Generate a confusion matrix to evaluate model performance
conf <- confusionMatrix(factor(pred), factor(test$Label))

# Display evaluation metrics
cat("Precision:", round(conf$byClass["Precision"], 2), "\n")
cat("Recall:", round(conf$byClass["Recall"], 2), "\n")
cat("F1-score:", round(conf$byClass["F1"], 2), "\n")
# K mean clustering
install.packages(c("ggplot2", "factoextra", "cluster"))

library(ggplot2)
library(factoextra)
library(cluster)

data("iris")
irisdata <- scale(iris[, -5])

set.seed(123)

fviz_nbclust(irisdata, kmeans, method = "wss")

model <- kmeans(irisdata, centers = 3, nstart = 25)

iris$Cluster <- as.factor(model$cluster)

print(model$centers)

table(model$cluster)

fviz_cluster(model, data = irisdata)

sil <- silhouette(model$cluster, dist(irisdata))

fviz_silhouette(sil)
# Load libraries
library(tm)
library(SnowballC)
library(caret)
library(e1071)

# Load and prepare data
sms_data <- read.csv("https://raw.githubusercontent.com/jbrownlee/Datasets/master/sms_spam.csv", stringsAsFactors = FALSE)
colnames(sms_data) <- c("Label", "Message")
sms_data$Label <- factor(sms_data$Label, levels = c("ham", "spam"))

# Clean and preprocess text
corpus <- VCorpus(VectorSource(sms_data$Message))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeNumbers)
corpus <- tm_map(corpus, removeWords, stopwords("english"))
corpus <- tm_map(corpus, stemDocument)
corpus <- tm_map(corpus, stripWhitespace)

# Create Document-Term Matrix
dtm <- DocumentTermMatrix(corpus)
dtm_df <- as.data.frame(as.matrix(dtm))
dtm_df$Label <- sms_data$Label

# Split into training and testing sets
set.seed(123)
split_index <- createDataPartition(dtm_df$Label, p = 0.8, list = FALSE)
train_data <- dtm_df[split_index, ]
test_data <- dtm_df[-split_index, ]

# Separate features and labels
x_train <- train_data[, -ncol(train_data)]
y_train <- train_data$Label
x_test <- test_data[, -ncol(test_data)]
y_test <- test_data$Label

# Train Naive Bayes model and predict
nb_model <- naiveBayes(x_train, y_train)
predictions <- predict(nb_model, x_test)

# Evaluate performance
conf_mat <- confusionMatrix(predictions, y_test)
print(conf_mat)
cat("Accuracy:", round(conf_mat$overall["Accuracy"] * 100, 2), "%\n")
# Load packages
library(class)
library(ggplot2)
library(caret)

# Normalize and prepare data
data(iris)
norm <- function(x) (x - min(x)) / (max(x) - min(x))
iris_norm <- as.data.frame(lapply(iris[1:4], norm))
iris_norm$Species <- iris$Species

# Train-test split
set.seed(123)
idx <- createDataPartition(iris_norm$Species, p = 0.8, list = FALSE)
train_X <- iris_norm[idx, 1:4]; test_X <- iris_norm[-idx, 1:4]
train_Y <- iris_norm[idx, 5]; test_Y <- iris_norm[-idx, 5]

# Evaluate KNN for various k
eval_knn <- function(k) mean(knn(train_X, test_X, train_Y, k) == test_Y) * 100
k_vals <- seq(1, 20, 2)
acc <- sapply(k_vals, eval_knn)
results <- data.frame(K = k_vals, Accuracy = acc)
print(results)

# Plot accuracy vs. K
ggplot(results, aes(K, Accuracy)) +
  geom_line(color = "blue") + geom_point(color = "red") +
  labs(title = "KNN Accuracy vs. K", x = "K", y = "Accuracy (%)") +
  theme_minimal()

# Final model with optimal K
final_pred <- knn(train_X, test_X, train_Y, k = 5)
print(confusionMatrix(final_pred, test_Y))
# Load required packages
library(rpart)
library(rpart.plot)
library(ggplot2)
library(caret)

# Prepare data
data(iris)
set.seed(123)
index <- createDataPartition(iris$Species, p = 0.8, list = FALSE)
train <- iris[index, ]; test <- iris[-index, ]

# Train decision tree
model <- rpart(Species ~ ., data = train, method = "class")
rpart.plot(model, main = "Decision Tree", extra = 104)

# Predict and evaluate
pred <- predict(model, test, type = "class")
print(confusionMatrix(pred, test$Species))

# Visualize decision boundaries (Sepal features only; fit a tree on just those
# two predictors, since the full model also uses the Petal measurements)
model_sepal <- rpart(Species ~ Sepal.Length + Sepal.Width, data = train, method = "class")
grid <- expand.grid(
  Sepal.Length = seq(min(iris$Sepal.Length), max(iris$Sepal.Length), 0.1),
  Sepal.Width = seq(min(iris$Sepal.Width), max(iris$Sepal.Width), 0.1)
)
grid$Species <- predict(model_sepal, newdata = grid, type = "class")

ggplot(iris, aes(Sepal.Length, Sepal.Width, color = Species)) +
  geom_point() +
  geom_tile(data = grid, aes(fill = Species), alpha = 0.2) +
  labs(title = "Decision Tree Boundaries (Sepal Features)") +
  theme_minimal()
{
	"blocks": [
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": ":xero_pride::house_cupcake::rainbow::pink-heart: What's On!  :xero_pride::house_cupcake::rainbow::pink-heart:",
				"emoji": true
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": "Good morning Brisbane! Please see below for what's on this week."
			}
		},
		{
			"type": "divider"
		},
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": ":calendar-date-9: Monday, 9th June",
				"emoji": true
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": "\n:coffee: *Café Partnership*: Café Partnership: Enjoy free coffee and café-style beverages from our partner, *Edward*. \n\n :lunch: *Lunch*: from *12pm* in the kitchen."
			}
		},
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": ":calendar-date-11: Wednesday, 11th June",
				"emoji": true
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": ":coffee: *Café Partnership*: Café Partnership: Enjoy coffee and café-style beverages from our partner, *Edward*. \n\n :late-cake: *Morning Tea*: from *10am* in the kitchen."
			}
		},
		{
			"type": "header",
			"text": {
				"type": "plain_text",
				"text": ":calendar-date-13: Friday, 13th June",
				"emoji": true
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": ":rainbow: :pink-heart: #rainbow-x and the WX Team are gearing up for our *Pride Social* on *Friday 13th June!* Join us for a colourful evening filled with delicious food and drinks. Make sure you wear lots of colour to celebrate with us! :pink-heart::rainbow:"
			}
		},
		{
			"type": "divider"
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": "*LATER THIS MONTH:*"
			}
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": ":blob-party: *27th June:* Social Happy Hour: Wind down over some drinks & nibbles with your work pals!"
			}
		},
		{
			"type": "divider"
		},
		{
			"type": "section",
			"text": {
				"type": "mrkdwn",
				"text": "Stay tuned to this channel for more details, check out the <https://calendar.google.com/calendar/u/0?cid=Y19uY2M4cDN1NDRsdTdhczE0MDhvYjZhNnRjb0Bncm91cC5jYWxlbmRhci5nb29nbGUuY29t|*Brisbane Social Calendar*>, and get ready to Boost your workdays!\n\nLove,\nWX Team :party-wx:"
			}
		}
	]
}